Re: Cloning WeakSet/WeakMap

2018-02-09 Thread David Bruant
2018-02-09 10:05 GMT-05:00 Michał Wadas <michalwa...@gmail.com>:

> English isn't my native language, so I probably made a mistake.
>
oh ok, sorry for my misinterpretation


> I was asked to add WeakSet.prototype.union(iterable) creating new WeakSet
> instance including data from both iterable and original WeakSet.
>
ok, I don't have an opinion on this idea

David


>
>
>
> On 9 Feb 2018 4:01 pm, "David Bruant" <bruan...@gmail.com> wrote:
>
>> Hi,
>>
>> My understanding is that cloning a WeakSet into a Set would remove all
>> its properties related to security and garbage collection.
>>
>> The properties related to security and garbage collection of WeakSet are
>> based on the fact that its elements are not enumerable by someone who
>> would only be holding a reference to the WeakSet. If you want to "clone" a
>> WeakSet into a Set, it means you expect the set of elements to be
>> deterministically enumerable.
>>
>> WeakSets and Sets, despite their close names and APIs, are used in
>> different circumstances.
>>
>> David
>>
>>
>> 2018-02-09 9:53 GMT-05:00 Michał Wadas <michalwa...@gmail.com>:
>>
>>> Hi.
>>>
>>> I was asked to include a way to clone WeakSet in Set builtins proposal.
>>> Is there any consensus on security of such operation?
>>>
>>> Michał Wadas
>>>


Re: Cloning WeakSet/WeakMap

2018-02-09 Thread David Bruant
Hi,

My understanding is that cloning a WeakSet into a Set would remove all its
properties related to security and garbage collection.

The properties related to security and garbage collection of WeakSet are
based on the fact that its elements are not enumerable by someone who would
only be holding a reference to the WeakSet. If you want to "clone" a
WeakSet into a Set, it means you expect the set of elements to be
deterministically enumerable.

WeakSets and Sets, despite their close names and APIs, are used in different
circumstances.
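
To make the difference concrete, here is a small illustration (nothing normative, just what the APIs allow today):

```js
// A Set's elements can be enumerated by anyone holding a reference to it.
const set = new Set([{id: 1}, {id: 2}]);
for (const element of set) {
  // every element is reachable here
}

// A WeakSet exposes no iteration protocol, no size, no way to list its
// elements: holding `weakSet` alone gives no access to what it contains,
// which is exactly what its security and GC properties rely on.
const weakSet = new WeakSet([{id: 1}, {id: 2}]);
weakSet.has({id: 1}); // false: you need the original object references
```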

David


2018-02-09 9:53 GMT-05:00 Michał Wadas :

> Hi.
>
> I was asked to include a way to clone WeakSet in Set builtins proposal. Is
> there any consensus on security of such operation?
>
> Michał Wadas
>


Re: Array Comprehensions

2017-02-07 Thread David Bruant

Le 06/02/2017 à 17:59, Ryan Birmingham a écrit :

Hello all,

I frequently find myself desiring a short array or generator 
comprehension syntax. I'm aware that there are functional ways around 
use of comprehension syntax, but I personally (at least) love the 
syntax in the ES reference 
(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Array_comprehensions).


The best previous discussion on this that I found was six years old 
(https://esdiscuss.org/topic/array-comprehensions-shorter-syntax) and 
answers some of my questions, raising others. That said, I wanted to ask:


  * Why is the Comprehension Syntax in the reference yet not more
standard? It feels almost like a tease.


Proposals to change the standard are listed here:
https://github.com/tc39/proposals
The process for a feature to become standard is described here:
https://tc39.github.io/process-document/


  * How do you usually approach or avoid this issue?
  * Do you think we should look at improving and standardizing the
comprehension syntax?

Some might argue it is yet another instance of "superficial sugar 
obsession" [1] :-p I don't know where I stand personally.


In any case, if you want to start, write down a proposal (it can be 20 
lines in a gist [2]) including programs that are hard to express in 
JavaScript and whose readability would be significantly improved by 
the new syntax.
Perhaps submit it to the mailing-list and try to find a "TC39 champion" 
(criterion to enter stage 1).
At the very least, the proposal will be listed in the stage 0 proposals 
list [3].
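
To make that concrete, here is the kind of before/after comparison such a proposal could include (a sketch; the comprehension line uses the old, non-standard SpiderMonkey syntax documented on MDN and is kept in a comment because it does not run in standard engines):

```js
const numbers = [-2, -1, 0, 1, 2, 3];

// Old array comprehension syntax (non-standard, removed from the ES6 drafts):
// const doubledPositives = [for (x of numbers) if (x > 0) x * 2];

// Standard equivalent with array methods:
const doubledPositives = numbers.filter(x => x > 0).map(x => x * 2); // [2, 4, 6]
```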


David

[1] https://twitter.com/mikeal/status/828674319651786754
[2] http://gist.github.com/
[3] https://github.com/tc39/proposals/blob/master/stage-0-proposals.md


Re: Has there been any discussions around standardizing socket or file io usage?

2016-06-17 Thread David Bruant

Hi Kris,

Le 17/06/2016 06:44, Kris Siegel a écrit :
I didn't see this in the archives but I was curious if any 
consideration has been given for standardizing on features more 
commonly found in most other language's standard library.


For example reading and writing to sockets in JavaScript requires 
platform specific libraries and works very differently between them. 
The same goes for file io (which would obviously need restrictions 
when run in, say, a web browser).


Building these in would make JavaScript more universal and easier to 
learn (you learn one way to access a resource instead of 2 or 3 very 
different ways).


I would be happy to work on a proposal for such changes if they were 
desired by the community. Thoughts?
I understand your motivation, but I believe standardisation isn't the 
right avenue for solving the problem you describe.


Specifically, even if there were a standard, why would Node or browser 
makers implement it, given that they already have APIs for the job and lots 
of code already written on top of them?


Writing a standard is not a guarantee of implementation. Implementing 
something is lots of work for browser vendors and Node.js (and they're 
not short of things to do), so they usually need some confidence 
that the new thing adds enough value to be worth the cost.
One way to convey such confidence is to start the work, implement it 
as a library on top of current APIs, and show that there is adoption by lots 
of people. Adoption is usually an excellent proxy for value. That's 
how we got document.querySelectorAll (via jQuery) and Promise (via the 
gazillion promise libraries and the Promises/A+ spec) for instance.


In this case, from experience watching proposals come and go on standards 
mailing lists, I doubt this will be of interest to enough 
people to be worth it. But that's just my own opinion and I would love 
to be proven wrong.


One more thing to regret, maybe https://www.youtube.com/watch?v=7eNFQqMSxtU

David


Re: PRNG - currently available solutions aren't addressing many use cases

2015-12-01 Thread David Bruant

Le 01/12/2015 20:20, Michał Wadas a écrit :


As we all know, JavaScript as language lacks builtin randomness 
related utilities.
All we have is Math.random() and environment provided RNG - 
window.crypto in browser and crypto module in NodeJS.

Sadly, these APIs have serious disadvantages for many applications:

Math.random
- implementation dependant
- not seedable
- unknown entropy
- unknown cycle
(...)

I'm surprised by the level of control you describe (knowing the cycle, 
seeding, etc.). If you have all of this, then your PRNG is just a 
deterministic function. Why generate numbers which "look" random if 
you want to control how they're generated?



window.crypto
- not widely known

This is most certainly not a good reason to introduce a new API.

As we can see, all these are either unreliable or designed mainly for 
cryptography.


That's why we need an easy-to-use, seedable random generator

Can you provide use cases that the current options you listed make impossible 
or particularly hard?




Why shouldn't it be provided by library?

- the average developer can't and doesn't want to find and verify the quality of 
a library - "cryptography is hard" and math is hard too


A library or a browser implementation would both need to be "validated" 
by a test suite verifying some statistical properties. My point is that 
it's the same amount of work to validate the "quality" of the 
implementation.



- library size limits it usability on Web


How big would the library be?
How unreasonable would it be compared to other libraries for other 
use cases?
I'm not an expert on the topic, but from the few things I know, it's hard 
to imagine a PRNG function being more than 10k.
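
For illustration, here is a minimal sketch of a seedable PRNG (mulberry32) in plain JS. It is only meant to show the order of magnitude of the code involved, not to be a statistically validated generator:

```js
// mulberry32: a tiny 32-bit seedable PRNG. Same seed, same sequence.
function mulberry32(seed) {
  var a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    var t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

var rand = mulberry32(42);
console.log(rand(), rand()); // deterministic, reproducible values
```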


- no standard interface for PRNG - library can't be replaced as 
drop-in replacement


We've seen in the past that good libraries become de facto standards (at 
the library level, not the platform level) and candidates for being 
shimmed when the library is useful and there is motivation for a drop-in 
replacement (jQuery > Zepto, underscore > lodash). This can happen.
We've also seen ES Promises respect the Promises/A+ spec, or come close enough 
when they don't (I'm not very familiar with the details).


David


Re: An update on Object.observe

2015-11-03 Thread David Bruant

Hi,

Le 03/11/2015 12:26, Alexander Jones a écrit :
In my opinion, the fundamental record type we build our JS on should 
be getting dumber, not smarter. It feels inappropriate to be piling 
more difficult-to-reason-about mechanisms on top before reeling in 
exotic host objects.
JS objects were never only the record you're talking about. They were 
also used for OOP (used as dynamic this values if one property was a 
function and called after a dot).
And DOM objects also exposed things that did not have an equivalent in ES 
objects (aside from the easy "host objects" escape), so the language 
needed to catch up (as it did in ES5) despite having to be more 
difficult to reason about.


Immutable data structures might be what you're looking for though
https://github.com/sebmarkbage/ecmascript-immutable-data-structures

With Proxy out of the bag, I'm not so hopeful for the humble Object 
anymore.
This is a surprising statement. By exposing the low-level object API as 
a userland API (proxy traps + Reflect API), proxies make the low-level 
object API subject to the same backward-compat constraints as every 
other API.
If nothing else, the very existence of proxies puts an end to the 
evolution of the object model.


David


Re: ECMAScript 2015 is now an Ecma Standard

2015-06-17 Thread David Bruant

Lots of the changes were long awaited. ES2015 is an important milestone.
Even more important are the momentum and the recent changes to the way people 
can contribute to the standard.


Thank you to everyone involved in making all of this happen!

David

Le 17/06/2015 17:46, Allen Wirfs-Brock a écrit :
Ecma international has announced that its General Assembly has 
approved ECMA-262-6 /The ECMAScript 2015 Language Specification/ as an 
Ecma standard http://www.ecma-international.org/news/index.html


The official document is now available from Ecma in HTML at
http://www.ecma-international.org/ecma-262/6.0

and as a PDF at
http://www.ecma-international.org/ecma-262/6.0/ECMA-262.pdf

I recommend that  people immediately start using the  Ecma HTML 
version in discussion where they need to link references to sections 
of the specification.


Allen





Re: How would we copy... Anything?

2015-02-23 Thread David Bruant

Hi,

Le 23/02/2015 10:10, Michał Wadas a écrit :

Cloning objects is long requested feature.
clone object javascript yields 1 480 000 results in Google.

I'd like to share this as an answer
http://facebook.github.io/immutable-js/#the-case-for-immutability
If an object is immutable, it can be copied simply by making another 
reference to it instead of copying the entire object. Because a 
reference is much smaller than the object itself, this results in memory 
savings and a potential boost in execution speed for programs which rely 
on copies (such as an undo-stack).


```js
var map1 = Immutable.Map({a:1, b:2, c:3});
var clone = map1;
```

Despite people *saying* all over the Internet they want cloning, maybe 
they want immutability?



My proposition is to create a new well-known symbol - Symbol.clone - and a
corresponding method on Object - Object.clone.

Default behavior for an object is to throw on a clone attempt.
Object.prototype[Symbol.clone] = () => { throw new TypeError(); };
Users are encouraged to define their own Symbol.clone logic.

Primitives are cloned easily.
Number.prototype[Symbol.clone] = String.prototype[Symbol.clone] =
Boolean.prototype[Symbol.clone] = function() { return this.valueOf(); };

Primitives are immutable, so there is no need to clone them.
If you're referring to primitive wrapper objects, it might be better to forget 
about this weird corner of the language than to polish it.


Back to something you wrote above:

Users are encouraged to define their own Symbol.clone logic.
Perhaps this cloning protocol can be implemented purely in userland as a 
library and doesn't need support from the language. That's one of the 
reasons symbols were introduced, after all.
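
A minimal sketch of what such a userland protocol could look like (the symbol and helper names below are made up for illustration):

```js
// Library-defined clone protocol: a shared symbol plus one generic helper.
const cloneSymbol = Symbol("clone");

function cloneOf(value) {
  // primitives are immutable, return them as-is
  if (value === null || (typeof value !== "object" && typeof value !== "function")) {
    return value;
  }
  const method = value[cloneSymbol];
  if (typeof method !== "function") {
    throw new TypeError("This object does not implement the clone protocol");
  }
  return method.call(value);
}

// Opting a class into the protocol:
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  [cloneSymbol]() { return new Point(this.x, this.y); }
}

const copy = cloneOf(new Point(1, 2)); // Point { x: 1, y: 2 }
```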


David


State of the Loader API

2015-02-23 Thread David Bruant

Hi,

I was trying to find the module Loader in the latest draft, but found 
out that it's been removed from it [1][2].
YK: The loader pipeline will be done in a living spec (a la HTML5) 
so that Node and the browser can collaborate on shared needs.

I haven't been able to find this new document yet.

The module loader wiki page [3] (is the wiki relevant for anything 
other than historical reasons at this point?) points to the ES6 spec.


On the topic, I have found these :
https://gist.github.com/dherman/7568080
https://github.com/jorendorff/js-loaders
https://github.com/tc39/tc39-notes/blob/master/es6/2015-01/interfacing-with-loader-spec.pdf

What are the reference documents on module loader?

Thanks,

David

[1] 
https://github.com/rwaldron/tc39-notes/blob/b1af70ec299e996a9f1e2e34746269fbbb835d7e/es6/2014-09/sept-25.md#conclusionresolution-1
[2] 
https://github.com/rwaldron/tc39-notes/blob/844dfbcb87d66f3f8f1222ccb6f4a41e2ed4afd0/es6/2014-11/nov-18.md#41-es6-draft-status-update

[3] http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders


Object.freeze(Object.prototype) VS reality

2015-02-19 Thread David Bruant

Hi,

Half a million times the following meta-exchange happened on es-discuss:
- if an attacker modifies Object.prototype, then you're doomed in all 
sorts of ways

- Don't let anyone modify it. Just do Object.freeze(Object.prototype)!

I've done it on client-side projects with reasonable success. I've just 
tried on a Node project and lots of dependencies started throwing 
errors. (I imagine the difference is that in Node, it's easy to create 
projects with a big tree of dependencies which I haven't done too much 
on the client side).


I tracked down a few of these errors and they all seem to relate to the 
override mistake [1].
* In jsdom [2], trying to add a constructor property to an object 
fails because Object.prototype.constructor is configurable: false, 
writable: false
* in tough-cookie [3] (which is a dependency of the popular 'request' 
module), trying to set Cookie.prototype.toString fails because 
Object.prototype.toString is configurable: false, writable: false


Arguably, they could use Object.defineProperty, but they won't because 
it's less natural and it'd be absurd to try to fix npm. The 
Cookie.prototype.toString case is interesting. Of all the methods being 
added, only toString causes a problem. Using Object.defineProperty for 
this one would be an awkward inconsistency.
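
To make the failure mode concrete, here is a minimal reproduction of the override mistake (a sketch assuming Object.prototype has just been frozen):

```js
"use strict";
Object.freeze(Object.prototype);

function Cookie() {}
try {
  // Fails: Object.prototype.toString is now non-writable, so plain
  // assignment through the prototype chain is rejected (TypeError in
  // strict mode, silent failure in sloppy mode).
  Cookie.prototype.toString = function () { return "cookie"; };
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// The workaround that modules won't spontaneously adopt:
Object.defineProperty(Cookie.prototype, "toString", {
  value: function () { return "cookie"; },
  writable: true,
  configurable: true
});
```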



So, we're in a state where no module needs to modify Object.prototype, 
but I cannot freeze it because the override mistake makes any 
script that tries to set a toString property on an object throw.
Because of the override mistake, either I have to leave Object.prototype 
mutable (despite no module needing it to be mutable) or freeze it first 
hand and not use popular modules like jsdom or request.


It's obviously possible to replace all built-in props by accessors [4], 
of course, but this is a bit ridiculous.
Can the override mistake be fixed? I imagine no web compat issues would 
occur since this change is about throwing fewer errors.


David

[1] http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake
[2] 
https://github.com/tmpvar/jsdom/blob/6c5fe5be8cd01e0b4e91fa96d025341aff1db291/lib/jsdom/utils.js#L65-L95
[3] 
https://github.com/goinstant/tough-cookie/blob/c66bebadd634f4ff5d8a06519f9e0e4744986ab8/lib/cookie.js#L694
[4] 
https://github.com/rwaldron/tc39-notes/blob/c61f48cea5f2339a1ec65ca89827c8cff170779b/es6/2012-07/july-25.md#fix-override-mistake-aka-the-can-put-check



Re: Sharing a JavaScript implementation across realms

2015-01-13 Thread David Bruant

Le 13/01/2015 13:21, Anne van Kesteren a écrit :

A big challenge with self-hosting is memory consumption. A JavaScript
implementation is tied to a realm and therefore each realm will have
its own implementation. Contrast this with a C++ implementation of the
same feature that can be shared across many realms. The C++
implementation is much more efficient.
Why would a JS implementation *have to* be tied to a realm? I understand 
if this is how things are done today, but does it need to be?
Asked differently, what is so different about JS (vs C++) as an 
implementation language?
It seems like the sharing that is possible in C++ should be possible 
in JS.

What is (or can be) shared in C++ that cannot in JS?


PS: Alternative explanation available here:
https://annevankesteren.nl/2015/01/javascript-web-platform

From your post :
More concretely, this means that an implementation of 
|Array.prototype.map| in JavaScript will end up existing in each 
realm, whereas an identical implementation of that feature in C++ will 
only exists once.
Why? You could have a single privileged-JS implementation and each 
content-JS context (~realm) would only have access to a proxy to 
Array.prototype.map (transparently forwarding calls, which I imagine can 
be optimized/inlined by engines to be the direct call in the optimistic 
case). It would cost a proxy per content-JS context, but that is already much 
less than a full Array.prototype.map implementation.
In a hand-wavy fashion, I'd say the proxy handler can be shared across 
all content-JS. There is per-content storage to be created (lazily) in 
case Array.prototype.map is mutated (property added, etc.), but the 
normal case is fine (no mutation on built-ins means no cost)
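
Here is a hand-wavy sketch of that idea expressed in today's JS (purely illustrative; a real engine would do this below the language level, and `exposeToContentRealm` is a made-up name):

```js
// The single, shared implementation lives in the privileged realm.
const sharedMap = Array.prototype.map;

function exposeToContentRealm(sharedImpl) {
  // One cheap forwarding proxy per content realm, all pointing at the
  // same underlying function.
  return new Proxy(sharedImpl, {
    apply(target, thisArg, args) {
      return Reflect.apply(target, thisArg, args);
    }
  });
}

const contentRealmMap = exposeToContentRealm(sharedMap);
console.log(contentRealmMap.call([1, 2, 3], x => x * 2)); // [2, 4, 6]
```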


One drawback appears when content JS tries Object.freeze(Array.prototype.map). For this to 
work with proxies as they are, either the privileged-JS 
Array.prototype.map needs to be frozen (unacceptable, of course), or 
each proxy needs its own target (which is just as bad as one 
Array.prototype.map implementation per content-JS context).
The solution might be to allow proxies in privileged-JS contexts that 
are more powerful than the standard ones (for instance, they can pretend 
the object is frozen even when the underlying target isn't).


This is a bit annoying as a suggestion, because it means JS isn't really 
implemented in normal JS any longer, but it sounds like a reasonable 
trade-off (that's open for debate, of course).
The problem with proxies as they are today is that they were 
retrofitted into JS, which severely constrained their design, making use 
cases like the one we're discussing (or even membranes) possible, but 
cumbersome.

Privileged-JS taking some liberties from this design sounds reasonable.

(It was pointed out to me that SpiderMonkey has some tricks to share 
the bytecode of a JavaScript implementation of a feature across 
realms, though not across threads (still expensive for workers). And 
SpiderMonkey has the ability to load these JavaScript implementations 
lazily and collect them when no longer used, further reducing memory 
footprint. However, this requires very special code that is currently 
not available for features outside of SpiderMonkey. Whether that is 
feasible might be up for investigation at some point.) 
For contexts running in parallel to be able to share (read-only) data in 
JS, we would need immutable data structures in JS, I believe.

https://mail.mozilla.org/pipermail/es-discuss/2014-November/040218.html
https://mail.mozilla.org/pipermail/es-discuss/2014-November/040219.html

David


Re: Array.forEach() et al with additional parameters

2014-12-22 Thread David Bruant

Le 20/12/2014 13:47, Gary Guo a écrit :

bindParameter function is not very hard to implement:
```
Function.prototype.bindParameter = function(idx, val){
    var func = this;
    return function(){
        var arg = Array.prototype.slice.call(arguments);
        arg[idx] = val;
        return func.apply(this, arg);
    };
};
```

It's even easier if you use bind ;-)

Function.prototype.bindParameter = function(...args){
return this.bind(undefined, ...args)
}

David


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread David Bruant

Le 04/12/2014 09:55, Andreas Rossberg a écrit :

On 4 December 2014 at 00:54, David Bruant bruan...@gmail.com wrote:

The way I see it, data structures are a tool to efficiently query data. They
don't *have* to be arbitrarily mutable anytime for this purpose.
It's a point of view problem, but in my opinion, mutability is the problem,
not sharing the same object. Being able to create and share structured data
should not have to mean it can be modified by anyone anytime. Hence
Object.freeze, hence the recent popularity of React.js.

I agree, but that is all irrelevant regarding the question of weak
maps, because you cannot freeze their content.
The heart of the problem is mutability, and .clear is a mutability 
capability, so it's relevant. WeakMaps are effectively frozen for some 
bindings if you don't have the keys.



So my question stands: What would be a plausible scenario where
handing a weak map to an untrusted third party is not utterly crazy to
start with?
Sometimes you call functions you didn't write and pass arguments 
to them. WeakMaps are new, but APIs will have functions with WeakMaps as 
arguments. I don't see what's crazy. It'd be nice if I didn't have to 
review all the NPM packages I use to make sure they don't use .clear when I 
pass them a weakmap.
If you don't want to pass the WeakMap directly, you have to create a new 
object just in case (cloning or wrapping), which carries its own 
obvious inefficiency. Security then comes at the cost of performance while 
both could have been achieved if the same safe-by-default weakmap could be 
shared.



In particular, when can giving them the ability to clear
be harmful, while the ability to add random entries, or attempt to
remove entries at guess, is not?

I don't have an answer to this case right now.
That said, I'm uncomfortable with the idea of seeing a decision being 
made that affects the language of the web until its end based on the 
inability of a few people to find a scenario that is deemed plausible by 
a few other people within a limited timeframe. It's almost calling for an 
"I told you so" one day.

I would return the question: can you demonstrate there is no such scenario?

We know ambient authority is a bad thing; examples are endless in JS.
The ability to modify global variables has been the source of bugs and 
vulnerabilities.
JSON.parse implementations were modified by browsers because they used 
malicious versions of Array as a constructor, which led to data leakage.
WeakMap.prototype.clear is ambient authority. Admittedly, its effects 
are less broad and its malicious usage is certainly more subtle.


David


Re: Removal of WeakMap/WeakSet clear

2014-12-03 Thread David Bruant

Le 03/12/2014 16:26, Jason Orendorff a écrit :

On Wed, Dec 3, 2014 at 8:35 AM, Andreas Rossberg rossb...@google.com wrote:

(Back to the actual topic of this thread, you still owe me a reply
regarding why .clear is bad for security. ;) )

I'd like to hear this too, just for education value.
Unlike Map.prototype.clear, WeakMap.prototype.clear is a capability that 
cannot be implemented in userland.
With WeakMap.prototype.clear, any script can clear any weakmap even if 
it knows none of the weakmap's keys.
A script which builds a weakmap may legitimately later assume the 
weakmap is filled. However, passing the weakmap to a mixed-trusted 
(malicious or buggy) script may result in the weakmap being cleared (and 
break the assumption of the weakmap being filled and trigger all sorts 
of bugs). Like all dumb things, at web scale, it will happen.
WeakMap.prototype.clear is ambient authority whose necessity remains to 
be proven.


It remains possible to create clearless weakmaps to pass around (by 
wrapping a weakmap, etc.), but it makes security (aka code robustness) 
an opt-in and not the default.
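
A minimal sketch of the kind of wrapper one has to write to get that opt-in today (names are illustrative):

```js
function makeClearlessWeakMap(wm = new WeakMap()) {
  const wrapper = {
    get: key => wm.get(key),
    has: key => wm.has(key),
    set(key, value) { wm.set(key, value); return wrapper; },
    delete: key => wm.delete(key)
    // intentionally no clear(): code receiving `wrapper` can only touch
    // entries whose keys it already holds
  };
  return wrapper;
}
```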


Opt-ins are cool, but are often forgotten, like CSP, like use strict, 
like cookie HttpOnly, like HTTPS, you know the list :-) It would be cool 
if they were by default and people didn't have to learn about them all.


Security by default is cooler in my opinion.

David


Re: Removal of WeakMap/WeakSet clear

2014-12-03 Thread David Bruant

Le 03/12/2014 19:10, Jason Orendorff a écrit :

On Wed, Dec 3, 2014 at 9:04 AM, David Bruantbruan...@gmail.com  wrote:

A script which builds a weakmap may legitimately later assume the weakmap is
filled. However, passing the weakmap to a mixed-trusted (malicious or buggy)
script may result in the weakmap being cleared (and break the assumption of
the weakmap being filled and trigger all sorts of bugs). Like all dumb
things, at web-scale, it will happen.

OK. I read the whole thing, and I appreciate your writing it.

There's something important that's implicit in this argument that I
still don't have yet. If you were using literally any other data
structure, any other object, passing a direct reference to it around
to untrusted code would not only be dumb, but obviously something the
ES spec should not try to defend against. Right? It would be goofy.
Object.freeze and friends were added to the ES spec for the very purpose 
of being able to pass a direct reference to an object and defend against 
unwanted mutations.

Is Object.freeze goofy?


The language just is not that hardened. Arguably, the point of a data
structure is to be useful for storing data, not to be secure against
code that **has a direct reference to it**. No?
The way I see it, data structures are a tool to efficiently query data. 
They don't *have* to be arbitrarily mutable anytime for this purpose.
It's a point of view problem, but in my opinion, mutability is the 
problem, not sharing the same object. Being able to create and share 
structured data should not have to mean it can be modified by anyone 
anytime. Hence Object.freeze, hence the recent popularity of React.js.



So what's missing here is, I imagine you must see WeakMap, unlike all
the other builtin data structures, as a security feature.
I'm not sure what you mean by security feature. Any API is a security 
feature of sorts.



Specifically, it must be a kind of secure data structure where
inserting or deleting particular keys and values into the WeakMap does
*not* pose a threat, but deleting them all does.

Can you explain that a bit more?
I see the invariant you're talking about, I agree it's elegant, but to
be useful it also has to line up with some plausible security use case
and threat model.
The ability to clear any WeakMap anytime needs to be equally justified 
in my opinion. I'm curious about plausible use cases.


What about making 'clear' an own property of weakmaps and making it only 
capable of clearing the weakmap it's attached to?
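
A rough sketch of that alternative, written in userland terms just to illustrate the capability split (names are made up):

```js
function makeClearableWeakMap() {
  let wm = new WeakMap();
  const map = {
    get: key => wm.get(key),
    has: key => wm.has(key),
    set(key, value) { wm.set(key, value); return map; },
    delete: key => wm.delete(key)
  };
  // Only whoever holds `clear` can wipe this particular map; the map
  // itself can be shared without handing out that capability.
  const clear = () => { wm = new WeakMap(); };
  return { map, clear };
}
```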


David


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-02 Thread David Bruant

Hi,

I feel like I've been in an equivalent discussion some time ago, so 
taking the liberty to answer.


Le 02/12/2014 13:59, Andreas Rossberg a écrit :

On 1 December 2014 at 03:12, Mark S. Miller erig...@google.com wrote:

On Sun, Nov 30, 2014 at 12:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

Per spec ES6, it seems to me like attempting to define a non-configurable
property on a WindowProxy should throw and getting a property descriptor for
a non-configurable property that got defined on the Window (e.g. via var)
should report it as configurable.

Can you clarify? Do you mean that it should report properties as
configurable, but still reject attempts to actually reconfigure them?
Yes. This is doable with proxies (which the WindowProxy object needs to 
be anyway).

* the defineProperty trap throws when it sees configurable:false
* the getOwnPropertyDescriptor trap always reports configurable:true
* and the target has all properties actually configurable (but it's 
almost irrelevant to the discussion)
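
A minimal sketch of such a handler (illustrative only, not the actual WindowProxy machinery an engine would use):

```js
const underlyingWindow = {}; // stands in for the real Window object

const windowProxyHandler = {
  defineProperty(target, key, desc) {
    if (desc.configurable === false) {
      throw new TypeError("Cannot define a non-configurable property on WindowProxy");
    }
    return Reflect.defineProperty(target, key, desc);
  },
  getOwnPropertyDescriptor(target, key) {
    const desc = Reflect.getOwnPropertyDescriptor(target, key);
    if (desc) desc.configurable = true; // always reported as configurable
    return desc;
  }
};

const windowProxy = new Proxy(underlyingWindow, windowProxyHandler);
```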



Also, how would you allow 'var' to even define non-configurable
properties? If you want DefineProperty to throw on any such attempt,
then 'var' semantics would somehow have to bypass the MOP.
Thinking in terms of proxies, the runtime can have access to the target 
and the handler while userland scripts only have access to the proxy 
(which the HTML Living Standard mandates anyway with the difference 
between Window and WindowProxy objects: no userland script ever has 
access to the Window object).
The handler can have access to the list of all declared variables to know 
which properties should behave as if non-configurable.


David


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-02 Thread David Bruant

Le 02/12/2014 14:24, David Bruant a écrit :

Hi,

I feel like I've been in an equivalent discussion some time ago

The topic felt familiar :-p
http://lists.w3.org/Archives/Public/public-script-coord/2012OctDec/0322.html

David


Re: Proxies as prototypes

2014-11-23 Thread David Bruant

Le 23/11/2014 07:41, Axel Rauschmayer a écrit :
I’d expect the following code to log `GET bla`, but it currently 
doesn’t in Firefox. That’s because the Firefox implementation of 
proxies isn’t finished yet, right?
Yes. That would be https://bugzilla.mozilla.org/show_bug.cgi?id=914314 I 
think.


David


Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-15 Thread David Bruant

Le 13/11/2014 17:29, Boris Zbarsky a écrit :

On 11/13/14, 6:44 AM, Andreas Rossberg wrote:

Well, the actual diabolic beast and universal foot gun in this example
is setPrototypeOf. ;)


Note that there is at least some discussion within Mozilla about 
trying to make the prototype of Object.prototype immutable (such that 
Object.getPrototypeOf(Object.prototype) is guaranteed to always return 
the same thing, modulo someone overriding Object.getPrototypeOf), 
along with a few other things along those lines.  See 
https://bugzilla.mozilla.org/show_bug.cgi?id=1052139.
This would result in objects whose [[Prototype]] cannot be changed but 
whose properties can be changed.
This is not possible per ES6 semantics, I believe, unless the object is a 
proxy (whose setPrototypeOf trap throws unconditionally and which forwards the 
rest to the target). Is that a satisfactory explanation? Should new 
primitives be added?



Whether this is web-compatible, we'll see.

I guess my above questions can wait for the answer to this part.

David


Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-13 Thread David Bruant

The best defense is Object.freeze(Object.prototype);
No application worth considering needs to arbitrarily modify 
Object.prototype at an arbitrary point in time (or someone should bring 
a use case for discussion). It usually shouldn't and even if it does, it 
should do it at startup and freeze it afterwards.


Le 13/11/2014 12:25, Andrea Giammarchi a écrit :

well, Proxy can be a diabolic beast

```js
Object.setPrototypeOf(
  Object.prototype,
  new Proxy(Object.prototype, evilPlan)
)
```

having no way to understand if an object is a Proxy looks like a 
footgun to me in the long term, for libraries, and code alchemists
You're giving guns to people and trying to evaluate how to defend against 
them. Consider not leaving guns around the room ;-)


David

You indeed wrote that different Array methods need to know if there's 
a Proxy in there ... if dev cannot know the same via code they are 
unable again to subclass properly or replicate native behaviors behind 
magic internal checks.


If there is a way and I'm missing it, then it's OK

Regards








On Thu, Nov 13, 2014 at 7:15 AM, Tom Van Cutsem tomvc...@gmail.com 
mailto:tomvc...@gmail.com wrote:


2014-11-12 23:49 GMT+01:00 Andrea Giammarchi
andrea.giammar...@gmail.com mailto:andrea.giammar...@gmail.com:

If Array.isArray should fail for non pure Arrays, can we
have a Proxy.isProxy that never fails with proxies ?


We ruled out `Proxy.isProxy` very early on in the design. It's
antithetical to the desire of keeping proxies transparent. In
general, we want to discourage type checks like you just wrote.

If you're getting handed an object you don't trust and need very
strong guarantees on its behavior, you'll need to make a copy.
This is true regardless of proxies. In your example, even if the
array is genuine, there may be some pointer alias to the array
that can change the array at a later time.

Regards,
Tom






Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-12 Thread David Bruant

Le 12/11/2014 17:23, Tom Van Cutsem a écrit :
I agree with your sentiment. I have previously advocated that 
Array.isArray should be transparent for proxies. My harmony-reflect 
shim explicitly differs from the spec on this point because people 
using the shim spontaneously reported this as the expected behaviour 
and thought it was a bug that Array.isArray didn't work transparently 
on proxies.

For reference https://github.com/tvcutsem/harmony-reflect/issues/13

As far as I can remember, the argument against making Array.isArray 
transparent is that it's ad hoc and doesn't generalize to other types 
/ type tests. My opinion is that array testing is fundamental to core 
JS and is worth the exception.

Agreed. Author usability should trump language purity.

David



Regards,
Tom

2014-11-12 17:04 GMT+01:00 Axel Rauschmayer a...@rauschma.de 
mailto:a...@rauschma.de:


The subject is a SpiderMonkey bug.

Is that really desirable? Doesn’t it invalidate the Proxy’s role
as an interceptor?

-- 
Dr. Axel Rauschmayer

a...@rauschma.de mailto:a...@rauschma.de
rauschma.de http://rauschma.de






Re: Immutable collection values

2014-11-09 Thread David Bruant

Le 09/11/2014 15:07, Jussi Kalliokoski a écrit :
I figured I'd throw an idea out there, now that immutable data is 
starting to gain mainstream attention with JS and cowpaths are being 
paved. I've recently been playing around with the idea of introducing 
immutable collections as value types (as opposed to, say, instances of 
something).


So at the core there would be three new value types added:

* ImmutableMap.
* ImmutableArray.
* ImmutableSet.

Why would both Array and Set be needed?


We could also introduce nice syntactic sugar, such as:

var objectKey = {};

var map = {:
  [objectKey]: foo,
  bar: baz,
}; // ImmutableMap [ [objectKey, foo], [bar, baz] ]

var array = [:
  1,
  1,
  2,
  3,
]; // ImmutableArray [ 1, 2, 3, 4 ]

var set = :
  1,
  2,
  3,
; // ImmutableSet [ 1, 2, 3 ]

The syntax suggestions are up to debate of course, but I think the key 
takeaway from this proposal should be that the immutable collection 
types would be values and have an empty prototype chain.

I find : too discreet for readability purposes. What about # ?
That's what was proposed for records and tuples (which are pretty much 
the same thing as ImmutableMap and ImmutableSet respectively)

http://wiki.ecmascript.org/doku.php?id=strawman:records
http://wiki.ecmascript.org/doku.php?id=strawman:tuples
#SyntaxBikeshed

I think this would make a worthwhile addition to the language, 
especially considering functional compile-to-JS languages. With the 
syntactic sugar, it would probably even render a lot of their features 
irrelevant because the core of JS could provide a viable platform for 
functional programming (of course one might still be happier using 
abstraction layers that provide immutable APIs to the underlying 
platforms, such as DOM, but then that's not a problem in the JS' 
domain anymore).
It would also open the possibility of a new class of postMessage sharing 
(across iframes or WebWorkers) that allows parallel reading of a complex 
data structure without copying.


A use case that would benefit a lot from this would be computation of a 
force-layout algorithm with real-time rendering of the graph.


David


Re: Event loops in navigated-away-from windows

2014-09-30 Thread David Bruant

Le 29/09/2014 23:08, Anne van Kesteren a écrit :

On Mon, Sep 29, 2014 at 8:18 PM, Ian Hickson i...@hixie.ch wrote:

I certainly wouldn't object to the ES spec's event loop algorithms being
turned inside out (search for RunCode on the esdiscuss thread above for
an e-mail where I propose this) but that would be purely an editorial
change, it wouldn't change the implementations.

The proposed setup from Allen will start failing the moment ECMAScript
wants something more complicated with its loop.

How likely is this?

David




Re: Proxy objects and collection

2014-09-02 Thread David Bruant

Le 02/09/2014 20:07, Daurnimator a écrit :
So, I'd like to see some sort of trap that is fired when a Proxy is 
collected.
To prevent over specifying how Javascript garbage collectors should 
operate,
I propose that the trap *may* only be called at some *undefined* point 
after the object is not strongly referenced.
As Brendan said, what you want has been discussed as Weak References on 
the list, not really proxies.


The question of not wanting to over-specify upfront has come in other 
places in the past. Sometimes, even when the spec leaves freedom to 
implementors, it happens that implementors make some common choices, then 
people rely on the shared browser behavior of that spec-undefined 
functionality. Then, the feature has to be standardized de facto as 
commonly implemented afterwards.


My point here being that not specifying up front does not guarantee that 
the details won't ever have to be specified.

The enumeration order of object keys comes to mind.

I'm not saying that this is what will or even may happen in this case, but 
it's a reminder that leaving things undefined can backfire and produce the 
opposite of what was intended.


David


Re: Proposal: Promise.prototype.Finally

2014-08-18 Thread David Bruant

Yes. Needed it recently.
Ended up doing .then(f).catch(f) which can be survived but feels stupid.
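
For reference, a minimal userland sketch of the behaviour I was after (illustrative only, not the design discussed in the issue linked below):

```js
Promise.prototype.finally = function (onFinally) {
  return this.then(
    value => Promise.resolve(onFinally()).then(() => value),
    reason => Promise.resolve(onFinally()).then(() => { throw reason; })
  );
};
```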

David

Le 18/08/2014 21:20, Domenic Denicola a écrit :

Here is the current design for Promise.prototype.finally. I agree it is a 
useful feature.


https://github.com/domenic/promises-unwrapping/issues/18


Re: Why using the size property in set

2014-07-31 Thread David Bruant

Le 31/07/2014 09:25, Maxime Warnier a écrit :

Hi everybody,

I was reading the doc for the new Set and something surprised me:

Why does Set use the size property instead of the length property?
IIRC, and in my own words, "length" refers more to something that can be 
measured contiguously (like a distance or a number of allocated bytes, 
etc.) while "size" doesn't have this contiguous aspect to it.

David


Re: Reflect.hasOwn() ?

2014-07-27 Thread David Bruant

Le 27/07/2014 13:35, Peter van der Zee a écrit :

On Sat, Jul 26, 2014 at 5:14 PM, Mark S. Miller erig...@google.com wrote:

Hi Peter, what is the security issue you are concerned about?

Unless `Reflect` is completely sealed out of the box, you can never
know whether properties on it are the actual built-ins. That's all.

You can deeply freeze it yourself before any other script accesses it.
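
For instance, something like this run as the very first script (a rough sketch, assuming no earlier script has touched Reflect):

```js
function deepFreeze(obj) {
  Object.freeze(obj);
  for (const key of Object.getOwnPropertyNames(obj)) {
    const value = obj[key];
    const isObjectLike =
      (typeof value === "object" && value !== null) || typeof value === "function";
    if (isObjectLike && !Object.isFrozen(value)) {
      deepFreeze(value); // freeze-then-recurse also copes with cycles
    }
  }
  return obj;
}

deepFreeze(Reflect);
```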

Even without doing so, let's say Reflect is not sealed.
If you change it yourself (with code you wrote or imported), you know what 
to expect (or you didn't audit the code you import, but then you also know 
you can only expect the worst).
If you don't change Reflect yourself, then it's third-party code which 
does. But then, why did you give this third-party code access to the 
capability of modifying the built-ins?
You could set up a proxy on your own domain, fetch third-party scripts 
from there and serve them from your own domain confined (with Caja or similar).


My point being that there are ways to prevent any non-trusted scripts 
from modifying Reflect (assuming you stay away from script@src, which 
doesn't allow any form of confinement of the imported script).



For ES6, I'm not clear yet on how the module loader will work with 
regards to cross-domain scripts. I believe part of the web platform 
security model relies on a page not being able to read the content of 
third-party scripts it imports via script@src (IIRC because some 
websites send private data based on cookies in such scripts, so being 
able to read the content of such scripts would lead to terrible data 
leakage).

Does the module loader preserve such a guarantee?

David


Re: [PROPOSAL] use keyword

2014-07-25 Thread David Bruant

Hi,

Le 25/07/2014 20:52, Michaël Rouges a écrit :

Hi all,

Is there any plan around a functionality like the "use" keyword in PHP?

Why something like that? Because, in JS, there is no way to inject 
some variables without touching the "this" object known in a function.

Can you give an example of what you call injecting some variables?
More generally, can you give a concrete example of what you're trying to 
achieve?


Thanks,

David


TC39 vs the community

2014-06-20 Thread David Bruant

Hi,

I'm not quite sure what this is all about, so I'm forking in hope of 
clarification.
I'm sorry to send a message that will probably be read as noise by a lot 
of people, but I'm also tired of some of these pointless and 
unconstructive, if not destructive, fights among people (in here, on 
Twitter or elsewhere).
I hope to have a conversation to start the end of the alleged 
unharmonious relationship between TC39 and JS developers.


Domenic, your email suggests a fairly strong dichotomy between TC39 
and "the community". As far as I'm concerned, to begin with, I don't see 
any single thing that could be called "the community" in JavaScript. I share 
Axel's point of view: I see lots of communities with different backgrounds and 
interests, especially among the JS devs.
I personally don't feel associated with the community you describe. I 
encourage you to either speak only for yourself or provide a more 
specific description of whose point of view you're referring to 
(preferably without a definite article).


Le 19/06/2014 21:13, Domenic Denicola a écrit :

Unfortunately, that's not the world we live in, and instead TC39 is designing a 
module system based on their own priorities. (Static checking of multi-export 
names, mutable bindings, etc.)
If I knew nothing about how ES standardization works, I'd be thinking 
who the fuck are these TC39 people who decide features based on their 
own agenda against the interest/experience of the developers? Who do 
they think they are anyway?


Can you develop these particular accusations?
Why would TC39 have priorities that don't align with the needs of 
developers? Especially on modules, which are clearly one of the most 
awaited features as far as developers are concerned?


I'm not quite sure I understand the dichotomy and the alleged TC39 
priorities that would be that far off from what JS devs otherwise need, 
so please get it off your chest so we can all move on.



They've (wisely) decided to add affordances for the community's use cases, such as layering default 
exports on top of the multi-export model. As well as Dave's proposal in this thread to de-grossify 
usage of modules like fs. By doing so, they increase their chances of the module system 
being good enough for the community, so that the path of least resistance will be to 
adopt it, despite it not being designed for them primarily. It's still an open question whether 
this will be enough to win over the community from their existing tools, but with Dave's suggestion 
I think it has a better-than-even chance.

The transitional era will be a particularly vulnerable time for TC39's module design, however: as 
long as people are using transpilers, there's an opportunity for a particularly well-crafted, 
documented, and supported transpiler to give alternate semantics grounded in the community's 
preferred model, and win over enough of an audience to bleed the life out of TC39's modules. We 
already see signs of community interest in such ES6+ transpilers, as Angular 
illustrates. Even a transpiler that maintains a subset of ES6 syntax would work: if it supported 
only `export default x`, and then gave `import { x } from y` destructuring semantics 
instead of named-binding-import semantics, that would do the trick. Interesting times.
Whatever TC39 settles on and is eventually part of the standard will 
inevitably have tooling associated with it. Maybe not from the community 
(whoever that is), but I'm fairly certain TypeScript will adopt it, for 
instance. I'm fairly sure IDEs will all eventually have syntactic or 
intelligent support for the official standard modules (which is less 
clear for whatever-transpiler-modules).
Some people who aren't part of the community will write code with ES6 
modules. Whatever they end up being, I'll probably be on that end, pretty 
much for the same reason I choose not to write CoffeeScript (because 
AFAIC my own taste in code is worth less than others' ability to 
understand the code I write).


However they end up looking and behaving, ES6 modules will happen, with 
the community or without it.


David


Re: 5 June 2014 TC39 Meeting Notes

2014-06-14 Thread David Bruant

Le 12/06/2014 16:43, Domenic Denicola a écrit :
Also, David: modules are not named; you cannot import them. Check 
out 
https://github.com/dherman/web-modules/blob/master/module-tag/explainer.md

Thanks, that's the context I was missing.

I'm uncomfortable with the async part of the proposal as currently 
(under?)specified. Sharing my thought process.


Async loading prevents the rendering blocking problem, but creates 
another problem.
async loading isn't an end in and of itself. As far as I'm concerned, I 
never use script@async for app initialization code (which is the target 
of the <script type="module"> proposal) because it offers no guarantee 
on whether the script will be executed before or after the HTML is fully 
parsed.
I'm a big fan of script@defer though, because I have a clear idea of 
loading order (which will be covered by modules, so unimportant for the 
topic at hand) as well as when the script will be executed (when the 
HTML is fully parsed and DOM is complete, but before the 
DOMContentLoaded event)


I'm extremely interested in how other devs use the @async attribute in 
practice. In the context of an application, scripts that have no 
temporal dependency with other scripts loaded in the same document are 
rare beasts.


Back to <script type="module">, I'm not sold on arbitrary async 
loading if it forces me to add this boilerplate:

// assuming function loadApp(){}
if (document.readyState === "loading")
    document.addEventListener('DOMContentLoaded', loadApp);
else
    loadApp();

A @defer semantics for <script type="module"> might make more sense and 
not force all devs to add the above boilerplate to make sure their code 
loading is robust to the laws of physics.
If people want to execute scripts before the HTML is fully parsed, they 
can just use a regular <script>.


David


Re: 4 June 2014 TC39 Meeting Notes

2014-06-12 Thread David Bruant

Le 11/06/2014 18:08, Ben Newman a écrit :

https://gist.github.com/annevk/3db3fbda2b95e5ae9427

AWB: Should we try to replace WebIDL? (fourth bullet point from the 
gist above)

For what purpose? Replacing WebIDL isn't an end in itself.
Who would be the target of this replacement? Spec writers (TC39 or 
W3C)? Authors? Implementors? All of these together?


DH: Browser implementors love WebIDL, so anything that replaces it has 
to be as convenient as that. YK's idea: the new interface description 
language would prepend Legacy to existing WebIDL types, but still 
support them

+1.


MM: What about a design language that compiles to WebIDL?

DH: Problem: people explicitly argue against better interface design 
because it's not convenient/expressible in WebIDL.


MM: Right, the path of least resistance in WebIDL is not good JavaScript.
Why? (I'm not saying I disagree, but I'm trying to understand what 
WebIDL lacks)
What are people's opinions on the path of least resistance in describing 
interfaces in TypeScript?


DH, AR: TypeScript seemed like a way to define signatures of APIs, but 
was solving a different problem.


DH: Need a way to express what kind of implicit conversions are 
applied to passed-in values (something that TypeScript doesn't have).
As far as developers are concerned, it doesn't seem like an issue, so it 
looks like the TypeScript interface language is sufficiently expressive 
for most developer needs.
However, it looks like the notion of interface for standard features 
changes depending on whether it's taken from the point of view of an 
implementor or of an author.
Implementors have an imperative of interoperability with legacy APIs 
which is a constraint authors don't have.


YK: Also want to be able to express APIs in terms of function/method 
overloading (different behaviors for different input types), which is 
more like TypeScript than WebIDL.


AWB: If no work happens to build a better IDL, we'll be stuck with the 
status quo.


YK: Want to be able to describe `Promise<T>` result types as such, 
rather than `{ then: ???, catch: ??? }`

I want to agree, but IIRC thenables are considered like promises by 
built-in algorithms, so apparently, the consensus is not that people 
want a `Promise<T>` type as such separate from {then, catch?}.



SK: Willing to start working on a new IDL design, with help.

DH: Want to capture duality between Array.isArray arrays and 
array-like objects, and instanceof-Promise objects vs. { then: 
Function } objects.


SK: Can we improve whatever was lacking about TypeScript?
An annotation system like the one now in WebIDL might be enough of an 
addition to express legacy behaviors.


AR, YK: TypeScript types don't mean quite what you think they mean 
(Number, String).
A new interface language could keep the TypeScript syntax and adapt the 
semantics as deemed appropriate.


David


Re: 5 June 2014 TC39 Meeting Notes

2014-06-12 Thread David Bruant

Le 11/06/2014 18:21, Ben Newman a écrit :

## 7.1 <script type="module"> status update (from DH)

DH: Would really rather have <module>import { foo } from bar; 
...</module>, which is like <script> but async, strict mode, has its 
own top-level scope, and can import declaratively (using ES6 module 
import syntax) from other (named) modules.
Just to be sure I understand, with <module> (or <script type="module">), 
the module has to be named? So <module> never really makes sense on its 
own and should always have a name attribute?


DH: <module name="qux"> creates race conditions with HTML imports 
(part of WebComponents).


YK: People who saw named HTML module tags thought you should mix HTML 
imports with named module imports
YK: When you have packaging solution (SPDY, etc), you no longer need 
named modules

+1

MM: <script type="module"> would inherit the special termination rules 
of </script>, whereas old browsers might not handle <module> the same 
way, since that tag name doesn't mean anything special in old browsers


AR: <script type="module"> means the browser won't even try to parse 
it as JS, which is what we want [so that we can execute the script 
contents as a module, via some sort of polyfill]


DH: <script type="worker"> might also need to have the <script 
type="module"> semantics, and the type= attribute syntax makes it hard to 
mix and match those attributes; maybe <script worker module> would be 
better? (i.e. the type attribute values become optional value-less 
attribute names)


DH: The difference between <script type="module"> and <module> is that 
as long as there's … you always have the option of writing 
<script>System.import("main.js")</script>

TODO: Get DH to clarify this point when we edit the notes.

cc'ing Dave Herman for this part.

AR: [note taker (BN) may be misinterpreting] The JS API remains 
important even when we have HTML sugar.
Was this part edited after the misinterpretation or is it the original 
note?


David


Re: Object copy

2014-06-11 Thread David Bruant

Hi Maxime,

Good to see you here :-)

This topic has been discussed recently on Twitter. See
https://twitter.com/jeremyckahn/status/474259042005553154

I like Rick's answer in particular
https://twitter.com/rwaldron/status/475017360085364736
as I believe a large share of cloning is just about data

As discussed in this Twitter thread, immutable data structures would be 
an interesting idea too. If an object is guaranteed to be deeply 
immutable, then, it can be passed around without the need for cloning. 
Clones are only necessary because the initial object is mutable in the 
first place.

Immutable data structures have been briefly discussed here recently:
https://mail.mozilla.org/pipermail/es-discuss/2014-June/037429.html
(see replies too)

David

Le 11/06/2014 08:49, Maxime Warnier a écrit :

Thanks for your answers.

Object.assign seems good but only provides a copy of enumerable
properties, not a real deep clone.

I know for jQuery, that's why I specified "only for DOM", but it was
just to show the syntax :)

2014-06-11 0:00 GMT+02:00 Rick Waldron waldron.r...@gmail.com:



On Tue, Jun 10, 2014 at 12:32 PM, Maxime Warnier mar...@gmail.com wrote:

Hi All

Do you know if it is planned or maybe in discussion for ES7 to have a
simple clone system on objects ?

There are different notations, from :

  - jquery

Object.clone( [withDataAndEvents ] [, deepWithDataAndEvents ] )


jQuery doesn't clone objects, it clones DOM elements.

Rick







Re: My ECMAScript 7 wishlist

2014-06-06 Thread David Bruant

Le 06/06/2014 01:08, Rick Waldron a écrit :
On Thu, Jun 5, 2014 at 6:42 PM, Nicholas C. Zakas 
standa...@nczconsulting.com mailto:standa...@nczconsulting.com wrote:


* `Object.deepPreventExtensions()`, `Object.deepSeal()`,
`Object.deepFreeze()` - deep versions of
`Object.preventExtensions()`, et al.


Does "deep" mean that a Map instance's [[MapData]] is frozen if deepFreeze is called on a Map? e.g. what happens here:


var m = Object.deepFreeze(new Map());
m.set(1, 1);
I think the intention behind Object.freeze was to make objects immutable 
(at a shallow level), so maybe the semantics of Map.prototype.set (and 
all modifying operations of Map & co) should be changed to read 
[[IsExtensible]] and throw if false is returned. Given Maps are already 
in the wild, this decision might need to be taken quickly.


or should an Object.makeImmutable be introduced? (it would be freeze + 
make all internal [[*Data]] objects immutable)
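For illustration, a sketch of what happens today, which is what motivates both options (runnable in any ES6 engine):

```js
const m = Object.freeze(new Map());
m.set(1, 1); // no error: freeze only locks ordinary properties,
             // not the internal [[MapData]]
m.size;      // 1 (the frozen Map was still mutated)
```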



* `Object.preventUndeclaredGet()` - change an object's behavior to throw an error if you try to read from a property that doesn't exist (instead of returning `undefined`).

(I already know that Nicholas and I disagree on the topic, but sharing for the sake of debate.)



This can be achieved with Proxy right, or is that too cumbersome?
Code-readability-wise, wrapping in a proxy is as cumbersome as a call to 
Object.preventUndeclaredGet I guess.


This sort of concern is only a development-time concern and I believe the runtime shouldn't be bothered with it (I'm aware it already is in various web). For instance, the TypeScript compiler is capable today of catching this error. Given that we have free, cross-platform and fairly easy to use tools, do we need assistance from the runtime?


David

[1] https://twitter.com/passy/status/469127322072014849


Re: My ECMAScript 7 wishlist

2014-06-06 Thread David Bruant

Le 06/06/2014 15:57, Mark S. Miller a écrit :
By contrast, a Map's state is more like the private instance variable 
state of a closure or a post-ES6 class.
The capabilities to arbitrarily modify Maps (set/delete on all keys, 
with any values) will be expected by any ES6-compliant code to be 
globally available, so a Map's state cannot reasonably be considered 
private.
This differs from the state of a closure, access to which is strictly mediated by the public API that exposes it, and by the fact that this API is not provided globally (unlike Map.prototype).


Object.freeze of a Map should not alter the mutability of this state 
for the same reason it does not alter the state captured by a closure 
or a future class instance.
I'd argue the Map state is very much like regular objects (for which you 
can't deny [[Set]], [[Delete]], etc.), not closure's state.


In an ES6 world, denying access to the global Map.prototype.* would 
break legitimate code, so that's not really an option confiners like 
Caja could provide.





or should an Object.makeImmutable be introduced? (it would be
freeze + make all internal [[*Data]] objects immutable)


We do need something like that. But it's a bit tricky. A client of an 
object should not be able to attack it by preemptively deep-freezing 
it against its wishes.

I don't see the difference with shallow-freezing?
It's currently not possible to defend against shallow-freezing (it will 
be possible via wrapping in a proxy).





This can be achieved with Proxy right, or is that too cumbersome?

Code-readability-wise, wrapping in a proxy is as cumbersome as a
call to Object.preventUndeclaredGet I guess.

This sort of concerns are only development-time concerns and I
believe the runtime shouldn't be bothered with these (I'm aware it
already is in various web). For instance, the TypeScript compiler
is capable today of catching this error. Given that we have free,
cross-platform and fairly easy to use tools, do we need assistance
from the runtime?


Yes. Object.freeze is a runtime production protection mechanism, 
because attacks that are only prevented during development don't 
matter very much ;).
Just to clarify, I agree that Object.freeze was necessary in ES5 (had we had proxies, it might have been harder to justify?), because there was no good alternative to protect an object against the parties it was shared with.
But the concern Nicholas raises doesn't seem to have this property. 
Reading a property that doesn't exist doesn't carry a security risk, 
does it? Object.preventUndeclaredGet doesn't really protect against 
anything like ES5 methods did.


David


Re: My ECMAScript 7 wishlist

2014-06-06 Thread David Bruant

Le 06/06/2014 17:47, Frankie Bagnardi a écrit :

Couldn't preventUndeclaredGet() be implemented with proxies?
Yes it can. Doing it is left as an exercise to the reader... Wait... Don't bother, Nicholas did it :-)

http://www.nczonline.net/blog/2014/04/22/creating-defensive-objects-with-es6-proxies/

It actually sounds like an extremely useful feature for development 
builds of libraries and applications.  Typos are very very common, and 
often difficult to look over while debugging.  On the other hand, it 
would break a lot of existing code if you try to pass it as an object 
to a library; you'd have to declare every possible value it might 
check (which isn't necessarily bad).  Most of the time, it's just an 
options object, or an object it'll iterate over the keys of.


Using it on arrays would also reduce off-by-1 errors (though I don't 
see them often in JS).
Ever since I started using forEach/map/filter/reduce, I haven't had an off-by-one error on arrays. Highly recommended! (I think I've heard Crockford making the same recommendation in a recent talk, but I cannot find the link)


David












Re: My ECMAScript 7 wishlist

2014-06-06 Thread David Bruant

Le 06/06/2014 18:16, Nicholas C. Zakas a écrit :


On 6/6/2014 8:38 AM, Mark S. Miller wrote:


But the concern Nicholas raises doesn't seem to have this
property. Reading a property that doesn't exist doesn't carry a
security risk, does it? Object.preventUndeclaredGet doesn't
really protect against anything like ES5 methods did.


That's true, but misses the point I was trying to make. For normal ES 
objects, it is already part of their API contract with their clients 
that the clients can do feature testing to detect the presence or 
absence of a method. The most common way to do such feature testing 
is to get the property and see if it is falsy. (Variations include, 
testing for undefined, testing for undefined or null, and testing if 
its typeof is function.) It's fine if the provider of an 
abstraction does not wish to support this pattern. But it is not ok 
for one client of an object which does support it to prevent that 
object's other clients from successfully using feature detection.


Sorry I was sleeping while most of this conversation was happening. :)

I understand the point about feature detection, it would suck if some 
random code did Object.preventUndeclaredGet() on an object you own and 
were using feature detection on. I still wish for some way to do this 
other than through proxies, but I agree that it would be nice for just 
the object provider to be able to set this behavior.

It is possible via an API following the same pattern as revocable proxies:

const {object, toggle} = makeUndeclGetThrowObject();
toggle(); // now the object throws when there is a [[Get]] on an undeclared property


Keep the toggle function locally so only trusted parties access it, 
share the object as you wish.
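A possible sketch of such a factory, using a Proxy (illustration only; makeUndeclGetThrowObject is just the name used in the snippet above):

```js
function makeUndeclGetThrowObject(target = {}) {
  let throwing = false;
  const object = new Proxy(target, {
    get(t, key, receiver) {
      if (throwing && !(key in t)) {
        throw new TypeError("reading undeclared property: " + String(key));
      }
      return Reflect.get(t, key, receiver);
    }
  });
  return { object, toggle() { throwing = !throwing; } };
}
```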

Admittedly more cumbersome than your solution or proxies :-p

David


Re: Bytecode

2014-05-19 Thread David Bruant

Le 14/05/2014 19:13, Axel Rauschmayer a écrit :
What is the best “bytecode isn’t everything” article that exists? The 
“the web needs bytecode” meme comes up incredibly often, I’d like to 
have something good to point to, as an answer.


This one looks good: 
http://mozakai.blogspot.de/2013/05/the-elusive-universal-web-bytecode.html
I want to suggest 
https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript
I know it's not a direct answer to your question and I know the talk is 
not 100% serious, but it builds on a trend about JavaScript that 
suggests that JavaScript can be good enough as it is and a bytecode 
isn't needed. This talk also contains bit and pieces of knowledge 
helping to understand this trend.


David


Re: Object.getOwnPropertyDescriptor can return just about anything

2014-05-09 Thread David Bruant

Le 09/05/2014 08:50, Tom Van Cutsem a écrit :

Rick,

It's true that allowing user-invented custom attributes will not break 
any important existing invariants (except perhaps that all existing 
descriptors can be assumed not to have any own properties besides the 
standard attributes. Existing code may depend on that, although it 
feels highly unlikely).
Just to try to assess the unlikelihood and understand the cases where a 
ES5 code expectations aren't met:


The only case where ES6 and ES5 may diverge is for 
Object.getOwnPropertyDescriptor where a Proxy may return something that 
cannot be expected from any ES5 object.
The after-trap completes the property descriptor (and when completing, picks specifically either a data or an accessor property), so code that expects a complete property descriptor cannot be broken.
However, a divergence may only occur if, for instance, the code loops 
over the property descriptor properties or expects exactly 4 properties.
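To make the possible divergence concrete, a sketch (whether the non-standard attribute survives descriptor completion is precisely the question here):

```js
const p = new Proxy({ x: 1 }, {
  getOwnPropertyDescriptor(target, key) {
    const desc = Reflect.getOwnPropertyDescriptor(target, key);
    if (desc) desc.metadata = "user-invented attribute"; // non-standard field
    return desc;
  }
});

const d = Object.getOwnPropertyDescriptor(p, "x");
// ES5 code may assume d only ever carries value/writable/enumerable/configurable
// (or get/set); whether d.metadata shows up here is what completion decides.
```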


Is that correct or am I missing cases?

David


Re: [[Set]] and inherited readonly data properties

2014-03-26 Thread David Bruant

Le 26/03/2014 19:24, Jason Orendorff a écrit :

 "use strict";
 function Pony() {}
 Object.freeze(Object.prototype);
 Pony.prototype.toString = function () { return "Pony"; };

The last line here throws a TypeError in ES5 and ES6.*  Can we change
it? To me, it stands to reason that you should be able to freeze
Object.prototype and not break your other code, as long as that code
doesn't actually try to modify Object.prototype.

It looks like the override mistake.
http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake
Mark Miller agrees with you. I agree with you.
The consensus is apparently that it is the desired behavior.
Threads on the topic:
https://mail.mozilla.org/pipermail/es-discuss/2012-January/019562.html
https://mail.mozilla.org/pipermail/es-discuss/2013-March/029414.html
(there might be meeting notes on this topic too)


This bit some Mozilla hackers in http://bugzil.la/980752.

Compatibility: Changing from throwing to not-throwing is usually ok.
In addition, I don't think Chrome implements this TypeError.
I can observe it does in Chrome 33. (The REPL doesn't consider the "use strict"; wrap in an IIFE to see the error being thrown.)
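For instance, the quoted snippet wrapped in an IIFE so the directive is honored even in a REPL:

```js
(function () {
  "use strict";
  function Pony() {}
  Object.freeze(Object.prototype);
  Pony.prototype.toString = function () { return "Pony"; }; // TypeError here
})();
```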


David


Re: Iterator current/prev value

2014-03-23 Thread David Bruant

Le 23/03/2014 19:24, Brendan Eich a écrit :

Marcus Stade wrote:
This is assuming that the `current` or `prev` property is indeed implemented by the engine and not user land, as that indeed both carries implementation cost and the risk of running out of sync. Is there any way other than generator functions to implement iterators? Is any ol' object with a function called `next` an iterator?


Any old object. It's a structural or duck-typed protocol.

Longer form at :
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/The_Iterator_protocol 
(reviews welcome)
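A minimal hand-rolled example of the duck-typed protocol (no generator involved):

```js
// any object with a next() method returning {value, done} plays the part
let i = 0;
const naturals = {
  next() {
    return { value: i++, done: false };
  }
};

naturals.next(); // { value: 0, done: false }
naturals.next(); // { value: 1, done: false }
```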


David


Re: ES6 iteration over object values

2014-03-16 Thread David Bruant

Le 16/03/2014 00:45, Rick Waldron a écrit :


On Sat, Mar 15, 2014 at 7:38 PM, Jason Orendorff 
jason.orendo...@gmail.com mailto:jason.orendo...@gmail.com wrote:


On Sat, Mar 15, 2014 at 5:19 PM, David Bruant bruan...@gmail.com
mailto:bruan...@gmail.com wrote:
 Even if error prone, I'd be interested to hear about arguments
in the sense
 that the risk outweighs the benefits. Iterable-by-default
objects is a nice
 battery included feature.

I'm pretty sure es-discuss has been over this, but it doesn't hurt
to restate:

1. This would mean that evolving any object in any API from *not*
having an @@iterator method to providing its own @@iterator method
would be a backward compatibility risk. Existing code might be using
the default @@iterator to enumerate the object's properties.

2. The default Object.prototype.@@iterator would not appear on
Object.create(null), so the one kind of object people would most want
to have this behavior (Objects specifically created for use as
dictionaries) would be the only kind of object that wouldn't have it.
A separate function would be better---you could apply it to anything
with properties.

Either reason alone would be enough, but to me #1 is a killer.
Platform evolution hazards are bad news. You get stuff like
Array.prototype.values being backed out of browsers, and then you get
stuff like @@unscopables.

I'd like to see an Object.entries method, and Object.values for
completeness. Same visibility rules as Object.keys.

  for (let [k, v] of Object.entries(myObj)) {
  // do something with k and v
  }


Very enthusiastically agree---these would be excellent additions that balance nicely with Dict (null __proto__ b/w keys, values, entries), along with all of the built-ins that received keys, values and entries on their prototypes in ES6.

Alright, convinced too.

Thanks :-)

David


Re: ES6 iteration over object values

2014-03-15 Thread David Bruant

Le 15/03/2014 01:32, Brandon Benvie a écrit :

On 3/14/2014 5:16 PM, Mark Volkmann wrote:

Does ES6 add any new ways to iterate over the values in an object?
I've done a lot of searching, but haven't seen anything.
I'm wondering if there is something more elegant than this:

Object.keys(myObj).forEach(function (key) {
  let obj = myObj[key];
  // do something with obj
});


Not built in, but ES6 does provide a better story for this using 
generators and for-of:


```js
// using a generator function
function* entries(obj) {
  for (let key of Object.keys(obj)) {
yield [key, obj[key]];
  }
}

// an alternative version using a generator expression
function entries(obj) {
  return (for (key of Object.keys(obj)) [key, obj[key]]);
}

for (let [key, value] of entries(myObj)) {
  // do something with key|value
}
```
Currently, there is no default Object.prototype.@@iterator, so for-of'ing over an object throws a TypeError, which isn't really a useful default.
Not having a default @@iterator also means that Map({a:1, b:2}) throws, which is unfortunate.


Should what you just wrote be made the default 
Object.prototype.@@iterator? It is compatible with the signature the Map 
constructor expects too.
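For instance, using the entries() helper quoted above:

```js
// today a plain object is not iterable, so this throws a TypeError:
// new Map({ a: 1, b: 2 });

// with an explicit (or default) object iterator, it works:
const m = new Map(entries({ a: 1, b: 2 }));
m.get("a"); // 1
```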


David


Re: ES6 iteration over object values

2014-03-15 Thread David Bruant

Le 15/03/2014 22:51, C. Scott Ananian a écrit :


It would be nicer to add an Object.entries() method that would return 
that iterator.



Object.prototype.entries or Object.entries(obj)?

That would be less error prone than adding a default iterator to every 
object.


The world has survived for-in and its weirdo unchangeable 
enumerable+proto-climbing rules and that was error prone.
Now we can control enumerability of things that are added to the 
prototype and the proposed default-but-still-overridable semantics is to 
iterate only over own properties. It's less clear to me that the 
proposed semantics is error prone.


The world has also evolved to a point where tooling can be written to warn about a non-overridden @@iterator property for a given class (I feel like it is something TypeScript could do at least).


Even if error prone, I'd be interested to hear arguments that the risk outweighs the benefits. Iterable-by-default objects are a nice batteries-included feature.


David


  --scott






Re: Enriched Descriptors, maybe ES7 ?

2014-03-10 Thread David Bruant

Le 10/03/2014 08:02, Tom Van Cutsem a écrit :
Using Firefox's built-in direct proxies implementation I get a 
TypeError. I'll investigate further and file a bug.

You already did https://bugzilla.mozilla.org/show_bug.cgi?id=601379 ;-)

David


Re: Array.prototype.contains

2014-03-05 Thread David Bruant

Le 05/03/2014 09:24, Eric Elliott a écrit :
What ever happened to Array.prototype.contains? There's an old 
strawman for Array.prototype.has ( 
http://wiki.ecmascript.org/doku.php?id=strawman:array.prototype.has ) 
that references this thread: ( 
https://mail.mozilla.org/pipermail/es-discuss/2012-February/020745.html )

Let's try to add it to the next meeting agenda
https://github.com/tc39/agendas/pull/27

But it seems the thread fizzled out a couple years ago, and 
Array.prototype.contains didn't seem to make its way into ES6. That 
seems odd, since we do have String.prototype.contains, and it seemed 
like it was desirable for DOM.

The DOM won't inherit from it directly, will it?



It's also a standard utility function in several libraries.

Was it left out on purpose? If so, what was the justification?

I predict code like this without it:

''.contains.call([1,2,3],2);// true

Comparing .indexOf(x) against -1 works today for this use case and will continue to.
I'd be happy to see !~arr.indexOf(el) disappear in favor of a use of .contains() though.
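For comparison (the last line is what the strawman would allow):

```js
const arr = [1, 2, 3];

arr.indexOf(2) !== -1;  // explicit but verbose
!!~arr.indexOf(2);      // the terse bitwise trick
// arr.contains(2);     // hypothetical, per the strawman above (not available today)
```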


David


Re: Final iterator spec

2014-03-03 Thread David Bruant

Le 03/03/2014 10:11, Andy Wingo a écrit :

On Sun 02 Mar 2014 04:18, Domenic Denicola dome...@domenicdenicola.com writes:


You can just do `if (Symbol.iterator in potentialIterable)`.

Of course, this can introduce time-of-check-to-time-of-use bugs.
Actually calling @@iterator on the iterable is more reliable.
This only shifts the problem one step without really solving it. Calling @@iterator may return a non-iterator or may return something that looks like an iterator ('next' method), but throws when calling 'next'.
I wonder if time-of-check-to-time-of-use bugs can be avoided entirely in JS?
It might be possible to guarantee some properties in TypeScript assuming all consumers of a piece of code are checked by the TypeScript compiler.


In practice, it looks like JS devs have lived well with solutions like Domenic's.
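A sketch of the kind of check being discussed, and of why it only narrows the window rather than closing it:

```js
function isIterable(x) {
  return x != null && Symbol.iterator in Object(x);
}

function consume(obj) {
  if (!isIterable(obj)) return;
  // time passes: obj[Symbol.iterator] could be deleted or replaced here,
  // or it could return an object whose next() throws...
  for (const v of obj) {
    console.log(v); // ...so this loop can still fail despite the check
  }
}

consume([1, 2, 3]); // 1 2 3
```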


David


Re: Fwd: .next('yo') in newborn generators

2014-02-20 Thread David Bruant

Le 20/02/2014 06:39, Brendan Eich a écrit :

Bradley Meck wrote:


If I am reading the spec right (and I may not be), only the generator 
should fail? The first call to gen().next(value) must have value be 
undefined, and the others do not check.


I thought we agreed at the January 28 meeting to get rid of this 
error, but I can't find it in the notes. The January meeting notes 
have missed other conclusions, though. Allen?

https://github.com/rwaldron/tc39-notes/blob/master/es6/2014-01/jan-28.md#concensusresolution
BN: Have to go back and think more about this. Maybe a helper function 
can be created.

It looks like no firm decision has been made yet.

David


Re: can delegating yield be as fast as a normal function call?

2014-02-20 Thread David Bruant

Le 20/02/2014 15:03, Andy Wingo a écrit :

Hi,

This isn't really an es-discuss topic, as it is about performance of
implementations rather than the language itself.
Speaking only for myself: I think this thread is appropriate for es-discuss. How developers use the language and what they would expect from implementations does inform how the language may/should/could evolve.


David


Re: Maps with object keys

2014-02-17 Thread David Bruant

Le 17/02/2014 22:55, Benjamin (Inglor) Gruenbaum a écrit :
My issue here is that I want to index on complex values. I was under 
the impression ES6 maps solve amongst others the problem that with 
objects - keys are only strings.
With maps, all native types (string, number, boolean, undefined, null, object) can be keys.
For complex values, funnel your values down to one of these (by hashing or serializing or whatever fits your need). It's easy enough to write and use-case-specific enough to justify not being part of the language.
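A sketch of the funnelling idea (the key derivation is entirely use-case specific; here a naive string serialization):

```js
const byPoint = new Map();
const keyOf = (p) => p.x + "," + p.y; // or JSON.stringify, a hash, etc.

byPoint.set(keyOf({ x: 1, y: 2 }), "treasure");
byPoint.get(keyOf({ x: 1, y: 2 })); // "treasure", even though the key objects differ
```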


David


Status of Array.prototype.contains

2014-02-17 Thread David Bruant

Hi,

In the latest draft, I see String.prototype.contains, but no 
Array.prototype.contains


I see something about a no-brainer here 
https://mail.mozilla.org/pipermail/es-discuss/2011-December/019108.html

I haven't found a bug on bugs.ecmascript or a mention in the meeting notes.
Or was it superseded by .find?

... or would introducing it risk breaking half of the web?

Might be useful to put an end to the ~arr.indexOf tricks [1]

David

[1] 
https://mxr.mozilla.org/mozilla-central/source/addon-sdk/source/lib/sdk/event/core.js#45



Re: Promise.cast and Promise.resolve

2014-02-07 Thread David Bruant

Le 07/02/2014 22:05, Brendan Eich a écrit :

Kevin Smith wrote:
- A *working* implementation should be created and solutions to 
real-world use cases should be programmed using the design before any 
spec language is authored.  Spec-language is a poor medium for 
communicating both design intent and programming intent.


Yes, this.

A working implementation is a lot of work, even a polyfill. But tests.
Very recent case in point : 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=20701
It was a lot of words in English, lots of HTML5 spec vocabulary with 
very special and detailed meaning, I had lost track at some point, even 
with the spec-y summary by Bobby [1]. But then, he created tests and that was suddenly fairly easy to review [2]. It was fairly easy to point out places that might be under-spec'ed and needed more tests.
Tests are an excellent medium to discuss feature design. The current test suite leaves room for interpretation on a corner case? Throw in a new test to disambiguate!


On a side note:  it seems to me the the existence of the design 
champion, who by definition is deeply invested in the design process, 
implies the existence of its dual:  the anti-champion, who is 
detached from the details of the design work and provides a vital 
holistic perspective.


Yes, the _advocatus diaboli_. We have plenty of those, though. Too 
many, at this point.

woopsy...

David

[1] https://etherpad.mozilla.org/html5-cross-origin-objects
[2] https://www.w3.org/Bugs/Public/show_bug.cgi?id=20701#c133


Re: detecting JS language mode for tools

2014-01-27 Thread David Bruant

Le 27/01/2014 06:45, Brendan Eich a écrit :

Kevin Smith wrote:



Is a new attribute necessary? What about using @type?


Old browsers will ignore unknown types, losing the two-way
fallback option.


Two-way fallback?  Why is that important?  Since modules are 
implicitly strict, there is little intersection between scripts and 
modules.


One can write strict code that runs fine in old browsers!
Yes. For transition from non-strict to strict and advice on writing 
strictness-neutral code, there is 
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope/Strict_mode/Transitioning_to_strict_mode?redirectlocale=en-USredirectslug=JavaScript%2FReference%2FFunctions_and_function_scope%2FStrict_mode%2FTransitioning_to_strict_mode

(reviews welcome)

Why do we want inline module-bodied elements in HTML? That's the topic 
here.
Indeed. I'm wondering why we need inline script for modules. Historically [1], the good practice regarding inline scripts was to put them either in <head> or before </body> (the rest of the scripts can load after DOMContentLoaded/load or on demand).
I imagine modules are intended to be reusable, stateless, 
timing-independent pieces of code. If, for perf reasons, we do need JS 
to be in the page alongside the HTML, we don't need it to run right away.


I feel that without too much work, we can have the best of all worlds.
Module code could be sent along with the HTML, inlined, but with an unrecognized @type (and a class like "module"), so that it runs in neither old nor new browsers. At a time decided by the author, the author can do:


var scripts = document.querySelectorAll('script.module');
if (es6modulesSupported) {
    [].forEach.call(scripts, function (s) { loader.load(s.textContent); });
} else {
    [].forEach.call(scripts, function (s) { (1, eval)(s.textContent); });
}

(I'm not sure about the edges, but you get the idea)

We get the network perf benefits of sending the modules over the wire. The only way it differs from inline scripts is the scheduling, but I wonder how often it'll be important to load modules before DOMContentLoaded.


David

[1] 
http://www.youtube.com/watch?feature=player_detailpagev=li4Y0E_x8zE#t=1537



Re: detecting JS language mode for tools

2014-01-27 Thread David Bruant

Le 27/01/2014 19:41, David Herman a écrit :

On Jan 27, 2014, at 2:07 AM, David Bruant bruan...@gmail.com wrote:


Indeed. I'm wondering why we need inline script for modules.

Because people write inline scripts all the time. It's unacceptably 
inconvenient not to be able to bootstrap your app with inline code. It also 
allows you to control for when the scripts resource is there, in particular to 
be sure that necessary bootstrapping/kernel code has loaded before you need to 
do some wiring up of your app.
Agreed. Note that I didn't suggest we stop writing inline scripts, and I proposed an alternative to script@module that can work today.
Granted, it's somewhat hacky, but I think it can work during the period in which there'll be both ES6 and non-ES6 browsers to support.


I was sloppy in my phrasing. What we don't need is the current inline script "execute right now and block everything else" semantics, specifically for modules, whose order of execution shouldn't block things.



But it's not even worth overthinking. It's so obviously, obscenely anti-usable 
not to be able to write

 <script module>
 import $ from "jquery";
 import go from "myapp";
 $(go);
 </script>

inline that I'm surprised this is even a discussion.
If the snippet is only targeting ES6 browsers, it can work without the module attribute (I think?). This snippet doesn't work on non-ES6 browsers, though.


I feel two different problems are being discussed in this thread? One 
about inline modules, one about compatibility, (both a bit away from the 
original topic ;-)). I was on the compatibility track.


David


Re: detecting JS language mode for tools

2014-01-24 Thread David Bruant

Le 24/01/2014 18:26, John Lenz a écrit :



REPL is a dilemma: if you parse as a module, then obtaining the last expression value is not simple. If you parse as a script, then common cut/paste fails on export/import statements.


My basic question remains.  As a tool owner how do I know if what I'm 
looking at is intended to be a Module or a Script?

How do you know if some code is intended for the browser or Node?
How do you know some code is intended to be used in a WebWorker and not 
in the main thread?
How do you know the code won't be concatenated after a "use strict" when someone else uses it?


The code itself lacks the context in which it's being loaded (hence very 
defensive patterns like UMD (Universal Module Definition)).
If you want to be exhaustive, you'll have to make an assumption or make 
your tool smarter about the context.
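For illustration, the kind of defensive UMD wrapper meant here (the myLib name is made up):

```js
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    define([], factory);          // AMD loader present
  } else if (typeof module === "object" && module.exports) {
    module.exports = factory();   // CommonJS / Node
  } else {
    root.myLib = factory();       // plain browser global
  }
}(this, function () {
  return { answer: 42 };
}));
```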


David


Re: Standard modules?

2014-01-21 Thread David Bruant

Le 20/01/2014 23:16, Kevin Reid a écrit :
SES needs to visit every 'primordial' / 'singleton' object to ensure 
they're made immutable and harmless. (Other 'meta' code might also 
benefit though I don't know of any examples offhand.)


This job is easier if all such objects are reachable via traversing 
data properties.


ES5 contains only one object which this is not true of:
Beware, I've heard that the browser contains many more of these objects. See discussion starting at 
https://bugzilla.mozilla.org/show_bug.cgi?id=900034#c4
In a nutshell, WebIDL defines [NoInterfaceObject] which, when reified in ECMAScript, means that a prototype object exists, but it can't be found via Interface.prototype (since "Interface" is not defined as a global). I imagine the only way to find these is to create an instance and then call Object.getPrototypeOf. It's apparently used in WebGL sometimes.
I imagine there is a complete repository of WebIDL files somewhere 
(Moz/Blink codebase, maybe W3C, maybe alongside the WebGL spec) you can 
use to list all of these interfaces. How to create the different 
instances is another story.
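A sketch of reaching such a prototype (WebGL extensions being the example mentioned above; the extension name is just one that commonly exists):

```js
var gl = document.createElement("canvas").getContext("webgl");
var ext = gl && gl.getExtension("OES_texture_float");
if (ext) {
  var proto = Object.getPrototypeOf(ext);
  // 'proto' typically has no corresponding global constructor, so a traversal
  // of data properties from the global object never reaches it
}
```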


David


Re: Standard modules?

2014-01-20 Thread David Bruant

Le 20/01/2014 18:39, Brendan Eich a écrit :

Allen Wirfs-Brock wrote:
It isn't clear that there is much need for a global name for GeneratorFunction.  If you really need to access it, you can always get it via:


   (function *() {}).constructor

Do we even need (function *() {}).constructor !== Function?
(and [[FunctionKind]] "generator" and a different @@toStringTag and...)
What is its use case anyway? Creating a generator from source?
What's wrong with:
eval("(function*(x, y, z, ...yo){ /*body*/ })")
(and when the source isn't trusted, use indirect eval or soon enough the module loader)
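For reference, a sketch of both routes (runnable on an ES6 engine):

```js
// without a global name:
const GeneratorFunction = (function* () {}).constructor;
const g1 = new GeneratorFunction("n", "yield n * 2;");
[...g1(21)]; // [42]

// via eval of a generator function expression:
const g2 = eval("(function* (n) { yield n * 2; })");
[...g2(21)]; // [42]
```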


Does this present a hazard for CSP, which provides policy controls 
governing Function?
It introduces something that probably should be disabled by default and re-enabled only if the 'unsafe-eval' source keyword is present.
From a security perspective, note that this is a marginal 
(non-existent) protection and the underlying capability (executing 
arbitrary code) remains since an attacker can download a JS interpreter 
to eval any string itself.


David


.next('yo') in newborn generators

2014-01-15 Thread David Bruant

Hi,

Playing with the test cases of the regenerator project [1], I came 
across a case and was wondering what the intention of the spec is given 
that Firefox and Chrome recent implementations diverge.

Apologies for not reading all the previous discussions on this edge case.

Test case:
```js
function *gen(x) {
  yield x;
}

var g = gen('whatever');
console.log(g.next(0));
```


Chrome & regenerator:
{value: "whatever", done: false}

Firefox (Aurora 28):
TypeError: attempt to send 0 to newborn generator

From what I understand, the spec says an error should be thrown because 
the generator is in suspendedStart state and value is not undefined 
(25.3.3.2 GeneratorResume step 7).

Where do I file bugs?

David

[1] https://github.com/facebook/regenerator/blob/master/test/tests.es6.js


Re: transpiling ES6 generator functions to ES5: what next?

2014-01-14 Thread David Bruant

Hi Ben,

Sorry for the very late response.
This is quite an interesting work, thanks for sharing!
I'm particularly interested in your test suite [1] which is impressive.

This is making me realize that generators are fully compilable (efficiently from what I can see) into ES5, and it makes me wonder if the current generator specificities are worth it. Very specifically, do we really need Generator.prototype[@@toStringTag] === "Generator"?
From an author's point of view, I don't really see in which situation this information could matter. As a comparison, functions created with the class syntax do not have an @@toStringTag of "Class".

Generators would just be sugar to write iterators (+ .throw)


Le 03/11/2013 21:55, Ben Newman a écrit :


  * Given that this tool will become obsolete as more and more engines
implement ES6 generator functions, how can we maximize its value
in the meantime? Are there grey areas in the draft spec that can
be illuminated? Should I spend my time implementing (or getting
others to implement) await syntax and/or control-flow libraries
that leverage generator syntax?

You can most certainly experiment with await syntax and share what you've learned.
Are there any test cases you've written where you feel the expected spec behavior is odd or unintuitive in some aspect?



  * How would you design a system that selectively delivers transpiled
code to ES5-capable browsers and native generator code to
ES6-capable browsers, so that end users will benefit immediately
when they upgrade to a browser with native support for generators?

Since there is no semantic difference between the ES6 and your compiled 
version, it's unlikely the users will see a difference at all (not even 
sure the perf is that much different).


But if you really want to try there are different options with 
different  downsides.
1) Server-side UA sniffing. You get the User-Agent header, infer which 
browser it is and decide which version you should be sending. Send the 
ES5 version when you don't know the UA (safe default)


Downsides:
* if a browser changes its header, you may be sending the wrong version. 
This is a problem when you're sending the ES6 version to a non-ES6 
browser (which admittedly should be a very rare case)
* You need to update the list of ES6 User-Agent strings as new browsers 
arrive


2) Send a feature-detection JS snippet on the client which will decide which version to load (see the sketch further below).


Downside:
* having to wait until this snippet is executed to start code download 
(or one extra round-trip if code was originally inlined)


3) Send the compiler to the client side

Downside:
* more code

Personally, I'd go for sending the ES5 version to everyone. My second 
choice would be 1), but I guess it depends on the requirements.
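For option 2, a minimal feature-detection sketch (the bundle file names are made up):

```js
function supportsGenerators() {
  try {
    // throws a SyntaxError on engines without generator syntax
    new Function("function* g() { yield 1; }");
    return true;
  } catch (e) {
    return false;
  }
}

var script = document.createElement("script");
script.src = supportsGenerators() ? "app.es6.js" : "app.es5.js";
document.head.appendChild(script);
```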


David

[1] https://github.com/facebook/regenerator/blob/master/test/tests.es6.js


Re: Enumerability

2014-01-11 Thread David Bruant

Le 11/01/2014 18:03, Brendan Eich a écrit :

Axel Rauschmayer wrote:
I know this runs counter the conventional wisdom for specs, but I 
find design rationales incredibly important for making sense of 
what’s going on: The answers and discussions on this mailing list 
were essential in helping me understand the language.


+1.

+2

Case in point: Allen pointed out a couple of messages ago that for-in is effectively rendered useless in ES6, replaced by for-of.


David


Clarifications on the iterator protocol

2014-01-11 Thread David Bruant

Hi,

I'm starting a documentation on the iterator protocol and wanted to ask 
a few things just to be 100% sure, because some things may leave room to 
ambiguities.


## Just for confirmation

First, on the relevant TC39 meeting notes [1]. It is suggested that "Without Brendan, a champion of iterators and generators, don't have full consensus". Later notes don't come back to this, so I imagine Brendan agrees (upon confirmation, I'll PR the meeting notes to reflect this for future readers).



## Iterator protocol next signature.

The meeting notes suggest the following signature for next:
next: () -> {done: boolean, value?: any}
(it's not clear if it's the iterator protocol or generator.next signature)

However, in the current draft, the IteratorNext operation takes a value argument and passes it to the call to .next.
Also, although ES6 will not make use of that, it's possible for user-defined iterators to accept any number of arguments.
Also, the IteratorComplete operation seems to survive if there is no "done" property returned (interpreted as done: false, obviously).
In the end, it looks like the broader signature of user-created 
iterators is something like:

next: (value?: any, ...extraArgs) -> {done?: boolean, value?: any}

The language will never make use of the extra arguments, but 
user-defined sub-protocols might. The generator protocol does use the 
first argument if the generator body does.
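A sketch of a user-created iterator exercising that broader signature (the extra argument only means something to a hypothetical sub-protocol):

```js
let i = 0;
const skippable = {
  // still a valid 'next' as far as the language is concerned
  next(skip) {
    if (skip) i += skip; // sub-protocol extra: jump ahead
    return { value: i++, done: false };
  }
};

skippable.next();   // { value: 0, done: false }
skippable.next(10); // { value: 11, done: false }
```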



 Always has an own property called value

To answer my above question, it looks like the .next signature agreed upon is neither the iterator protocol (which seems effectively broader), nor the generator one, which, as quoted, always has an own "value" property.


David

[1] 
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-03/mar-12.md#conclusionresolution-1



Re: ES6 problem with private name objects syntax

2014-01-08 Thread David Bruant

Hi Maciej,

Le 08/01/2014 09:59, Maciej Jaros a écrit :

To my understanding private name objects

Note that their name is now "symbol" and not "private name" anymore.

are supposed to make private properties and functions available for 
new classes syntax in ECMAScript 6 standard.
A private keyword will be introduced in ES7. There is still 
disagreement on the specifics of the semantics.



But the syntax is rather strange:
```
var myPrivate = new Name();
class Test {
 constructor(foo) {
  this[myPrivate] = foo;
 }
}
```

I understand the motivation - using just `this[myPrivate]` wouldn't work because it could be inconsistent when `myPrivate` is a string variable. If `myPrivate='abc'` then `this[myPrivate]` is equivalent to `this.abc`... So that is the main reason Name objects were born, right?
For the sake of making your code easier to write, read and understand, 
you wouldn't reassign myPrivate, preferably even declare it with 
const instead of var.


BUT what is the point of having this new syntax if I need to predefine 
all private variables (also the ones used for methods)?
It's a temporary setup. A private keyword will be introduced and provide runtime-level privacy (as opposed to source-level privacy like TypeScript does).


Instead of above I could just use (shorter, more intuitive, already 
works):

```
var myPrivate;
class Test {
 constructor(foo) {
  myPrivate = foo;
 }
}
```

I could also secure the scope which would still be shorter for more 
then one variable:

```
(function(){
var myPrivate, myPrivate2;
class Test {
 constructor(foo) {
  myPrivate = foo;
  myPrivate2 = foo.toString();
 }
}
})()
```
Does it really already work? It looks to me like your private variables are shared by all instances (while "private" in a class is supposed to be per instance). When the constructor is called twice, only the last values will remain.


I'm probably missing some optimization points but I was unable to find 
them on ES Wiki.

I believe the wiki is outdated. [1] has a message at the top saying:
This proposal has progressed to the Draft ECMAScript 6 Specification, 
which is available for review here: specification_drafts. Any new issues 
relating to them should be filed as bugs at http://bugs.ecmascript.org. 
The content on this page is for historic record only and may no longer 
reflect the current state of the feature described within.



The only new thing is the Name object. I see no use case for it and it doesn't seem to be more readable than the current solution.
Per-instance runtime-level privacy. JavaScript lacks this badly. Note 
that I didn't mention classes. We need privacy even beyond the context 
of classes.

It's possible to achieve it with WeakMap with something along the lines of:

```js
var privateState = new WeakMap();
function createPrivateState(o){
  privateState.set(o, {});
}
function _private(o){ // "private" is a reserved keyword
  return privateState.get(o);
}

class C {
  constructor(yo){
    createPrivateState(this);

    _private(this).yo = yo;
  }

  getYoPlusTwo(){
    return _private(this).yo + 2;
  }
}
```



But performance is certainly atrocious because of the WeakMap lookup, by comparison to what it could be if the property were a string or symbol.


What I'm saying is - please consider dropping `Name` objects and use 
some new syntax (e.g. `this#variable`) to avoid clashes but make 
declarations more readable for humans.
I can't speak for TC39, but from what I see and read, they're not going 
to drop symbols.
Note that regarding privacy, there is the relationship strawman on the 
table IIRC

http://wiki.ecmascript.org/doku.php?id=strawman:relationships
I've lost track of what the state of that is.

Last thing I have found [2]:
Sam, Mark and Allen to work on relationships and varied 
representation in ES6.


David

[1] 
http://wiki.ecmascript.org/doku.php?id=strawman:maximally_minimal_classes
[2] 
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-03/mar-13.md#conclusionresolution-1



Additional Set.prototype methods

2013-12-31 Thread David Bruant

Hi,

I've been playing with Sets recently and believe that the following 
additions would make them more useful by default:

* Set.prototype.map
* Set.prototype.filter
* Set.prototype.toJSON = function(){
return [...this];
};

The first two are to easily create sets from existing sets, very much like what we already have with arrays. I haven't had a use for a .reduce yet, but maybe that would make sense too?
The toJSON is just to provide a good default. Obviously anyone dissatisfied with it can shadow it on specific instances. But this serialization makes more sense by default than the one you get now (own properties of the set object... of which there are none in common usage?)


Hopefully both IE11 and Firefox having shipped Sets without this toJSON 
behavior won't prevent this change?
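For concreteness, a rough sketch of what the first two could look like (illustration only, not spec text; the callback gets (value, value, set), like Set.prototype.forEach):

```js
Set.prototype.map = function (fn, thisArg) {
  const result = new Set();
  for (const value of this) result.add(fn.call(thisArg, value, value, this));
  return result;
};

Set.prototype.filter = function (fn, thisArg) {
  const result = new Set();
  for (const value of this) {
    if (fn.call(thisArg, value, value, this)) result.add(value);
  }
  return result;
};
```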


David


Re: Additional Set.prototype methods

2013-12-31 Thread David Bruant

Le 01/01/2014 00:34, Brandon Benvie a écrit :

How about Maps?
Sets and arrays are very much alike in that they are collections of items. Maps are more like objects. I'd expect maps to have methods like the ones we apply to objects (Object.keys, etc.), but I think everything is covered.

I guess a default toJSON could help Maps too.


And since their order is deterministic, how about the rest of the array extras?
I don't have a strong opinion either way. I've noticed I wanted the ones I listed, but felt like being conservative for a first round of suggestions.


I feel there might be room for some static methods too, like Set.union and Set.intersection. These are annoying to do with arrays (while keeping elements unique).
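Rough sketches of those statics (again, only to illustrate the intent):

```js
Set.union = function (a, b) {
  const result = new Set(a);
  for (const value of b) result.add(value);
  return result;
};

Set.intersection = function (a, b) {
  const result = new Set();
  for (const value of a) {
    if (b.has(value)) result.add(value);
  }
  return result;
};
```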


David









Re: Additional Set.prototype methods

2013-12-31 Thread David Bruant

Le 31/12/2013 20:52, Calvin Metcalf a écrit :
I had the same idea a couple weeks ago and turned it into a library 
https://github.com/calvinmetcalf/set.up if anyone finds it useful.
hmm... It is useful, but not future-proof. If methods with these names 
ever get standardized, your code will override them. If other code wants 
to use the standard one and there is the least semantic deviation 
between your library and the standard, this other code will break in 
subtle ways.


I'd recommand prefixing every non-standard addition with _ as in:

Set.prototype._filter = function(func, context){
...
}

This way, if a standard filter method arrives, your code will use 
_filter, other code will use the standard filter unambiguously and 
no code will break.
This additional _ guarantees non-collision with future standard methods. 
I suggested this some time ago and no browser vendor nor standard folks 
complained too hard for this not to work, so I guess it can be declared 
author territory. The flag is up.
Note: this _prefixing trick can also work for Array.prototype and 
String.prototype and EventTarget.prototype.


David


Re: Overly complicated Array.from?

2013-12-29 Thread David Bruant

Le 29/12/2013 14:42, Brendan Eich a écrit :

David Bruant wrote:

Le 29/12/2013 01:48, Brendan Eich a écrit :

David Bruant wrote:
it's somewhat ironic that Array carries 'from' given it's the only 
class that doesn't need it per case study for 3) above :-)


But Array is the return type.
It's always the return type of Array.from(x), but not the return type 
of Array.from.call(Whatever, x).


Of course, but why is this a problem for the name? Collection.from for 
Collection extends Array carries the same connotation.


Let's stick to real problems! The name is not a problem, AFAICT.
The part you answered to was a sidenote in my original message :-/ I 
never meant to question the name.


Back to the real problem.
As a summary, Rick explained that the problem to solve was ES6 code 
using ES3/5 codebases and the need to easily turn the arraylikes the 
latter define to real iterable for use in the former.
It is an author's problem today. By that I mean that it's not a language 
expressivity problem (like the one WeakMap, Proxies or Symbols solve) 
and it is a temporary problem (granted, the transition period may last 
10 years, but it's still temporary).


I believe this problem should be solved by authors via libraries and/or 
tooling and that the language should not carry a scar of a transitional 
problem. As a matter of fact, the library to solve the transitional 
problem is 20 (!) straightforward lines of code and already exists [1] 
(and with Rick's blessing, it'll be open source), so I don't believe 
assistance from the language is needed.


The costs of supporting arraylikes aren't big (zero in runtime?) though. 
Eventually, this part will just be dead code. Not optimal, but not a big 
deal.


Ok, thanks everyone :-)

David

[1] https://gist.github.com/rwaldron/1074126#file-array-goodies-js-L15-L36


Re: Overly complicated Array.from?

2013-12-28 Thread David Bruant

Le 27/12/2013 19:10, Claude Pache a écrit :
There is still the issue of potential libraries that produce 
arraylikes that don't inherit from a built-in arraylike prototype: 
those won't benefit from your polyfill without changing their 
inheritance strategy.
I don't understand the expression "inherit from a built-in arraylike prototype". Could you explain this further?



(I don't know whether it's a common issue.)

I think any use case involving libraries can be solved.
In an ES6 world, a library would make Array.from work by setting an appropriate @@iterator on the objects it generates.
Based on what I suggested (internalize iterators in the Array.from code for polyfills), the ES5 equivalent is to override Array.from as such:


```js
(function(){
  var nativeArrayFrom = Array.from;

  Array.from = function(x){
    if(/* x is of my library type */){
      /* generate an equivalent array using the traversal logic
         that would be used for its @@iterator in ES6 */
    }
    else{
      return nativeArrayFrom(x);
    }
  };

})();
```


Granted, it's not a super elegant solution, but it does work. The overhead becomes significant only in the degenerate cases where dozens of libraries override Array.from.


David


Re: Overly complicated Array.from?

2013-12-28 Thread David Bruant

Le 28/12/2013 15:25, Brendan Eich a écrit :

This seems overcomplicated. Isn't the likelier code something like

  Array.from || (Array.from = function(b) { var a=[]; for (var i=0; i<b.length; i++) a.push(b[i]); return a; });


Isn't the whole point to impute arraylikeness to the parameter?
In any case the important point is that it's possible to implement in an 
ES5 env whatever behavior is expected from Array.from in an ES6 env.


Granted, it's not super elegant solution, but it does work. The 
overhead becomes significant only in the degenerate cases where 
dozens of libraries override Array.from.


David, I took your side in the TC39 meeting, as the meeting notes 
disclosed. Rick prevailed (I think, my memory is hazy).
It's what I read from the notes too, but I feel something may have been 
overlooked.


You want the polyfillers to pay the price, while Rick proposes that 
ES6's built-in absorb arraylike fallback handling.


The difference is not in the polyfill (old browser) case, but in the 
present and future (ES6 and above) cases: some objects will remain 
arraylike yet lack @@iterator.
In ES6 and above, why would one create such an object? What's a good use 
case?
My understanding of the current consensus is that an arraylike without 
@@iterator wouldn't work for for-of loops nor spread. Why not just 
create an array? jQuery and Zepto want to subclass Array (one creates arraylikes, the other does subclass by setting __proto__). It wasn't possible in ES5, but it is in ES6 with classes (and the super+@@create infrastructure).
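A sketch of the ES6 route (illustrative; the Elements name is made up):

```js
class Elements extends Array {
  // inherits push, map, filter, length bookkeeping and @@iterator
}

const els = new Elements();
els.push("a", "b");
[...els];             // ["a", "b"]; spread and for-of work out of the box
els instanceof Array; // true
```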


I feel that all the cases that justified arraylikes in the past have 
much better alternatives in ES6.
My little experience building a Firefox addon even suggests that sets 
replace arrays in most situations as most of what I do with arrays is 
.push, for-of and .map/filter/reduce (by the way, Set.prototype needs 
these too, but another topic for another time).



Why shouldn't Array.from help them out?
If these objects have a good reason to exist in an ES6 and above world, 
I agree, that's a good point. But is there a use case justifying their 
existence?


David


Re: Overly complicated Array.from?

2013-12-28 Thread David Bruant

Le 29/12/2013 00:11, Rick Waldron a écrit :
On Sat, Dec 28, 2013 at 5:44 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:


I believe that Array.from's only purpose is to provide guidance
for polyfills for people to use in ES3/ES5 code; nobody writing
ES6 would ever use it.


Ignoring any of the previous benefits I've discussed, it seems you're 
forgetting about the map function feature of Array.from?
Ok, so to re-focus for those following at home, there are 3 cases to 
consider for authors:

1) code only aiming at ES3/5
2) code aiming at both ES3/5 and ES6 environments
3) code only aiming at ES6 envs.

For 1), let's all keep doing what we've been doing. In ES3/5, there is 
not really a notion of iterable protocol as it's not used by the 
language as it is in ES6.
For 3), from an iterable to an array, it takes ``[...iterable]`` IIUC, 
no need for Array.from at all.
2) is the subtle case. There is only one code base. Because it needs to 
work in ES3/5, it can't use @@iterable (like jQuery as Rick stated). 
However, ES6 code may want to iterate over the arraylikes generated by 
the library with for-of and spread. This is where Array.from comes handy 
if it works both for @@iterables and arraylikes.
But an Array._from library could work equally well for this purpose. No 
need for the built-in Array.from to handle arraylikes.
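To make that concrete, such a library helper could look roughly like this (a 
sketch; Array._from is just the name used above, not a built-in): it uses 
@@iterator when the environment exposes one and falls back to a length-based 
copy otherwise.

    Array._from = function (x, mapFn) {
      var result = [];
      if (typeof Symbol !== 'undefined' && Symbol.iterator && x[Symbol.iterator]) {
        // iterable path: drain the iterator
        var it = x[Symbol.iterator]();
        for (var step = it.next(); !step.done; step = it.next()) {
          result.push(mapFn ? mapFn(step.value) : step.value);
        }
      } else {
        // arraylike path: plain length-based copy
        for (var i = 0; i < x.length; i++) {
          result.push(mapFn ? mapFn(x[i]) : x[i]);
        }
      }
      return result;
    };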


Dominic Denicola:
I also forgot about how it would be useful for subclasses, e.g. 
Elements.from(nodeList), since subclasses don't have their own 
dedicated spread syntax. Withdrawn in full.
Oh... super true. Array.from, or perhaps less confusingly "just-from", 
enables converting any iterable into another iterable (assuming proper 
@@create setup). But that's an ES6-only use case and is unrelated to the 
arraylike handling I think.
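For what it's worth, the subclass case reads roughly like this with the class 
syntax that eventually shipped (Elements is a hypothetical name):

    class Elements extends Array {}

    // 'from' is inherited as a static and constructs via its this value,
    // so the result is an Elements instance rather than a plain Array:
    const els = Elements.from(document.querySelectorAll('div'));
    els instanceof Elements; // true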
Side note: it's somewhat ironic that Array carries 'from' given it's the 
only class that doesn't need it per case study for 3) above :-)


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Overly complicated Array.from?

2013-12-28 Thread David Bruant

Le 29/12/2013 01:48, Brendan Eich a écrit :

David Bruant wrote:
it's somewhat ironic that Array carries 'from' given it's the only 
class that doesn't need it per case study for 3) above :-)


But Array is the return type.
It's always the return type of Array.from(x), but not the return type of 
Array.from.call(Whatever, x). I called Array.from just-from in my 
previous message as an attempt to reduce the confusion.


In the latest draft, step 1 of Array.from is Let C be the this value.
There are different return points that all return 'A' and 'A' created at 
step 8.a.i as the result of C.[[Construct]]. And C.[[Construct]] doesn't 
have to return an Array.


That's at least my understanding of the current draft.

just-from is a function that turns the iterable passed as argument into 
an array-like. It'll be an Array for Array.from but whatever else for 
Whatever.from.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Overly complicated Array.from?

2013-12-26 Thread David Bruant

Le 26/12/2013 05:00, Rick Waldron a écrit :


On Wed, Dec 25, 2013 at 7:33 PM, David Bruant

For the rationale, the wiki states [1]:
There are many array-like objects in JS (arguments objects, DOM
NodeLists, arrays from separate windows, typed arrays) and no
simple way to convert one of these to an instance of Array in the
current window; this is the rationale behind a simple conversion
function (Array.from).

I think that if all of these objects had a good default
@@iterable, there wouldn't be a need for the array-like part of
Array.from.
The good default most likely being based on .length, etc.


The array-like part is for all of those objects that _won't_ have an 
@@iterator, for one reason or another
I must have missed these reasons. No @@iterator also means these objects 
cannot be iterated via for-of loops by default and I can't think of 
a good reason for that for any of the listed (arguments, NodeList, 
arrays from different windows, typed arrays).
Do you have a link to previous discussions on this topic or a summary if 
that can be explained quickly?



and for useful shimming in ES5 runtimes.
Even if Array.from relies on @@iterator for runtimes with symbols, it 
doesn't prevent runtimes without symbols from embedding the iterator logic in 
the Array.from source (which is exactly what your prolyfill does).


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Overly complicated Array.from?

2013-12-26 Thread David Bruant

Le 26/12/2013 10:58, David Bruant a écrit :

Le 26/12/2013 05:00, Rick Waldron a écrit :


On Wed, Dec 25, 2013 at 7:33 PM, David Bruant

For the rationale, the wiki states [1]:
There are many array-like objects in JS (arguments objects, DOM
NodeLists, arrays from separate windows, typed arrays) and no
simple way to convert one of these to an instance of Array in the
current window; this is the rationale behind a simple conversion
function (Array.from).

I think that if all of these objects had a good default
@@iterable, there wouldn't be a need for the array-like part of
Array.from.
The good default most likely being based on .length, etc.


The array-like part is for all of those objects that _won't_ have an 
@@iterator, for one reason or another
I must have missed these reasons. No @@iterator also means these 
objects cannot be iterated via for-of loops by default and I 
can't think of a good reason for that for any of the listed 
(arguments, NodeList, arrays from different windows, typed arrays).
Do you have a link to previous discussions on this topic or a summary 
if that can be explained quickly?

Found the relevant part of the notes [1]:
BE: Let's not remain slaves to legacy, Array.from, for-of and spread 
use only iterable.


RW: What about pre ES6 environment?

BE: Can fall back to array-like if needs.

I guess this is where I differ as I don't see a need. In ES5 
environments, the default @@iterator can be embedded in Array.from, 
leading to something like (worst case for explanatory purposes):


Array.from = function(x){
    if(/*x is a NodeList*/){
        // polyfill default NodeList[@@iterator] behavior to create the array to return
    }
    if(/*x is an Arguments*/){
        // polyfill default Arguments[@@iterator] behavior to create the array to return
    }
    // ...
}


Most likely all of these @@iterator polyfills are the same array-like 
traversals, so there shouldn't be a need to separate each case, they 
most likely all use the same logic.


Rick Waldron:
The array-like part is for all of those objects that _won't_ have an 
@@iterator, for one reason or another
The Conclusion/Resolution section of [1] suggests: "Add iterator 
protocol to arguments object (should exist on all things)".
I went quickly through all the meeting notes I could find and didn't 
find something about some objects not having an @@iterator.


David

[1] 
https://github.com/rwaldron/tc39-notes/blob/master/es6/2012-11/nov-29.md#revisit-nov-27-resolution-on-iterables-in-spread
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Overly complicated Array.from?

2013-12-25 Thread David Bruant

Hi,

I was reading the current spec for Array.from and it felt too 
complicated to me. Currently, at a high-level it reads like:
1) if the argument is iterable (@@iterable symbol), create a fresh array 
made of the values iterated on with the iterator
2) (step9) if the object is array-like, len = [[Get]] ('length') and 
from 0 to len-1, copy the values of the array-like in a fresh array to 
be returned.


Note that between the two parts, a good share of spec is duplicated.

For the rationale, the wiki states [1]:
There are many array-like objects in JS (arguments objects, DOM 
NodeLists, arrays from separate windows, typed arrays) and no simple way 
to convert one of these to an instance of Array in the current window; 
this is the rationale behind a simple conversion function (Array.from).


I think that if all of these objects had a good default @@iterable, 
there wouldn't be a need for the array-like part of Array.from.

The good default most likely being based on .length, etc.
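A sketch of what such a length-based default could look like (using 
Symbol.iterator as the @@iterator key):

    function arrayLikeIterator() {
      var self = this, i = 0;
      return {
        next: function () {
          return i < self.length
            ? { value: self[i++], done: false }
            : { value: undefined, done: true };
        }
      };
    }
    // e.g. something along the lines of:
    // NodeList.prototype[Symbol.iterator] = arrayLikeIterator;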

David

[1] http://wiki.ecmascript.org/doku.php?id=strawman:array_extras
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Could delete methods rename to remove?

2013-12-18 Thread David Bruant

Le 17/12/2013 22:52, Alex Kocharin a écrit :
 I believe ecmascript isn't versionless yet like html is, and that 
number means something.
As far as I'm concerned, ECMAScript is versionless. As versionless as 
HTML. Implementations aren't monolithically moving from one standard 
version to the other. I don't believe we've ever seen a browser with 
exactly ES3 or exactly ES5 (wait... maybe IE10?! but with IE11, they're 
back to ES5+some ES6 features)
Modulo spec bugs and history details, version n is fully 
backward-compatible with version n-1.

TC39 decided to move to a more iterative spec release schedule recently too.
The version has also never been exposed to the runtime which encourages 
people to do version-agnostic feature detection.
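For example, such feature detection typically looks like this (illustrative only):

    if (typeof Array.from !== 'function') {
      // provide or load a fallback here
    }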


Version numbers mean nothing. Version numbers are kept only for the same 
reason W3C produces HTML5 and 5.1 and 6 specs. And I think the reason 
is that most people aren't used to how the web works and are reassured 
with classic versioning systems... so reassured that some people pay a 
different price to the same company when they're sold an HTML4 site or an 
HTML5 site (because 5 > 4, you know...). True story.


Maybe versions are just better marketing because "in the next version" 
suggests stronger progress than "in the next iteration"?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Could delete methods rename to remove?

2013-12-17 Thread David Bruant

Le 17/12/2013 10:19, Shijun He a écrit :
There are some methods using the reserved word delete, such as 
Map.prototype.delete, Set.prototype.delete... Though it is allowed 
since ES5, I think we'd better avoid it because it causes the es6-shim 
solution to fail on legacy browsers such as IE8.
Note that there is a warning [1] (maybe arguably). Among other 
incompatibilities, size is a getter too.


myMap.delete fails, but myMap['delete'] should work.
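To make the workaround concrete (assuming a Map shim is loaded):

    var m = new Map();
    m.set('k', 1);
    m['delete']('k'); // fine everywhere, including ES3-era parsers
    m.delete('k');    // a syntax error in e.g. IE8, which rejects reserved words after '.'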

David

[1] 
https://github.com/paulmillr/es6-shim/blob/4322eae20b6f8a7769fa1d89ac207ef8ee9e1ee4/es6-shim.js#L662

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Name of WeakMap

2013-12-16 Thread David Bruant

Le 16/12/2013 22:42, Anne van Kesteren a écrit :

If you're unclear on the name, just make it clear in the specification
that the name is not stable and that therefore it cannot ship yet (you
could implement it and ship it in nightlies and such of course).

Or don't put it in the spec at all?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Name of WeakMap

2013-12-16 Thread David Bruant

Le 16/12/2013 22:51, Oliver Hunt a écrit :

(I know Anne knows this argument, but I'm emailing this logic for people who 
aren't aware of it)

The reason for prefixing APIs is to allow a feature to be shipped and used by 
developers before the final api semantics are settled on.  In the event the 
spec doesn't change then they simply alias, but if the spec does change it 
allows an engine to continue to maintain the old behaviour in the prefixed API 
and so not break any content that depends on the API.

Saying that you should never ship anything if it would need prefixing means 
that you can never see real examples of usage because no _real_ site is going 
to use a feature that isn't available in actual shipping browsers.
This isn't true. Mozilla clearly intends to stop shipping prefixed 
features. Blink made this very clear too.


They both ship unprefixed APIs, but hidden behind a flag and/or in 
early versions (Canary and Aurora). This system works well enough, even 
for real websites, whatever you mean by "real".
Parts of WebRTC are currently only shipped in early browser versions and 
that allows real people to experiment with it and send feedback before 
it's considered stable enough to reach the web.



If the API is not prefixed then once it ships and is used it can never be fixed 
under the same name (see localStorage being stuck with a string backing store). 
 That's why prefixed APIs exist — it's not so we can say the API is unstable, 
it's so that the API can be changed once developers have started using 
preliminary versions.

In my opinion the cost of maintaining an old version of the API may be 
annoying, but is vastly outweighed by the ability to put features in the hands 
of authors without forcing the API to be stuck with its early draft semantics.
:-/ That's also how you end up with a de facto standard of webkit prefixes 
in CSS and the aliasing Opera did before the Chromium-based days. It's 
likely that the CSS specification will eventually contain the -webkit- 
properties. This is an unnecessary scar.


How web features arrive in stable versions of browsers changed a lot 
since localStorage. I largely prefer a model without prefixes and with 
early versions. Thanks to Google and Mozilla for their lead in this model!


David




—Oliver


On Dec 16, 2013, at 1:42 PM, Anne van Kesteren ann...@annevk.nl wrote:


On Mon, Dec 16, 2013 at 7:01 PM, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:

We are all use to write abominations such `var url = window.webkitURL ||
windows.mozURL || windows.URL` and similar stuff all over, the reason
utilities libraries are often preferred, so while I am very happy that IE
team finally has been able to catch up and be even in front of other
browsers, I do believe that incomplete specifications or those still under
discussion should be adopted with prefixes until finalized in their form in
order to promote less mistakes in the long term.

That way you end up with e.g. having to support mozMatchesSelector()
forever in addition to matches(). We're certainly going to avoid doing
that in Gecko.

If you're unclear on the name, just make it clear in the specification
that the name is not stable and that therefore it cannot ship yet (you
could implement it and ship it in nightlies and such of course).


--
http://annevankesteren.nl/
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Using const to remove debug code? Is there something stopping implementers from doing this?

2013-11-28 Thread David Bruant

Le 28/11/2013 09:59, Brandon Andrews a écrit :

Lately I've been writing very processor heavy Javascript. I feel like it could 
benefit a lot from having a syntax feature for removing debug statements. 
Obviously JS is interpreted and not compiled, so I'm not sure if this sounds 
completely unrealistic, but it has some very useful scenarios.

I like to write verbose type checking for functions to check ranges and throw 
exceptions if invalid input is detected. The issue is in a production 
environment (especially with games) the code executes too slowly with all the 
extra branches. It would be nice if there was a simple syntax to treat code as 
if it's commented out when a flag is set.
Does this need to be part of JavaScript (and be implemented in web 
browsers)?
From what I understand, what you're describing is purely a development 
time concern and not a (production) runtime concern, so I feel the 
solution should be found in better development tooling.


Good news! Olov Lassus already worked on something like this!
http://blog.lassus.se/2011/03/c-style-assertions-in-javascript-via.html
https://www.youtube.com/watch?v=yk6t4kRN53w

I haven't looked at it too much, but it might be possible to do 
assertions (that run in dev, but not in prod) with Sweet.js [1] macros. 
Potentially that's something that could be part of TypeScript too (I 
haven't seen an issue on this topic or in the roadmap, but maybe that's 
an addition they'd be open to do?).
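A tool-agnostic sketch of the dev-only assertion pattern being discussed 
(DEBUG and assert are hypothetical names); a build step that rewrites DEBUG to 
false, combined with dead-code elimination, can drop the whole branch from 
production builds:

    var DEBUG = true;

    function assert(condition, message) {
      if (!condition) { throw new Error(message || 'Assertion failed'); }
    }

    function advance(x, dt) {
      if (DEBUG) {
        assert(typeof x === 'number' && isFinite(x), 'x must be a finite number');
        assert(dt >= 0, 'dt must be non-negative');
      }
      return x + dt;
    }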


JavaScript isn't compiled, but we can build tools that do compile to JS 
without requiring support from the browser.


David

[1] http://sweetjs.org/
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: November 19, 2013 Meeting Notes

2013-11-27 Thread David Bruant

Le 27/11/2013 19:14, Rick Waldron a écrit :

# Nov 19 Meeting Notes

## 4.4 Finalizing the Proxy API for ES6
(Presented by Tom Van Cutsem)

(...)

DS: What is typeof and instanceof

AWB/BE: object

BE: Capital P

AWB: Ca???

DS: Whatever Proxy creates?

BE: That depends on what is created.

DS: By default?

BE: typeof depends if there is a call trap. instanceof depends on the 
prototype chain. All in the spec, so can create any object (apart from 
private state issues)
Shouldn't it depend on the target's typeof value? Depending on the apply 
(not call) trap makes typeof unstable (delete handler.apply).
In any case, extra caution is required to keep typeof stability for 
revokable proxies (on revocation, maybe the value needs to be saved 
somewhere).


nit:
instanceof depends on the prototype chain
=> Note that it calls the getPrototypeOf trap which doesn't enforce 
anything for extensible objects [1], so *the* prototype chain has a 
more volatile meaning for proxies than it has for regular objects.
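A sketch of that volatility (written with the constructor form as it 
eventually shipped): for an extensible target the trap result isn't enforced, 
and instanceof walks the chain through the trap.

    var flip = false;
    var p = new Proxy({}, {
      getPrototypeOf: function (target) {
        flip = !flip; // the trap is consulted on every [[GetPrototypeOf]]
        return flip ? Array.prototype : Object.prototype;
      }
    });
    p instanceof Array; // true on one evaluation, false on the next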


David

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-September/033370.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


ECMAScript error sink (was: Weak callbacks?)

2013-11-13 Thread David Bruant

Le 13/11/2013 06:15, Boris Zbarsky a écrit :

On 11/12/13 11:07 PM, David Bruant wrote:

I understand the need to know when a promise has an unhandled error at
development time, I'm less clear on why you need to know it at runtime.
Why would you do with this information? handle the error?


The same thing that sites (e.g. Facebook) do with window.onerror: 
phone home to the server about the bug so it can actually get fixed.
I'm sympathetic with this use case, but Weakrefs seem like the wrong 
tool to solve this problem. Wrapping every single promise in case one 
ended up failing in an unexpected way feels way too expensive. There 
should be a sort of error sink feature instead.


The browser has window.onerror for historical reasons, Node.js 
introduced Domains and Domain#intercept [1] for that reason IIUC.

Isn't it the sign that ECMAScript should have this feature built-in?
A global sink has something absurd to it, what about adding an error 
sink feature to module loaders? cc'ing ES6 Module folks


Ideally, the ECMAScript error sink would handled both uncaught thrown 
errors and unhandled promise errors.


David

[1] http://nodejs.org/api/domain.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ECMAScript error sink

2013-11-13 Thread David Bruant

Le 13/11/2013 08:11, Boris Zbarsky a écrit :

On 11/13/13 10:58 AM, David Bruant wrote:

I'm sympathetic with this use case, but Weakrefs seem like the wrong
tool to solve this problem.


I think I agree on that.


Ideally, the ECMAScript error sink would handled both uncaught thrown
errors and unhandled promise errors.


Defining unhandled promise error is not trivial, actually, unless 
you just mean rejected promise that no one ever sets any reject 
callbacks on.
That would be my definition. "No one ever sets any reject callback on" 
is itself undecidable (the "ever" part), but I feel it works well enough 
in practice. In cases where it doesn't, people have memory leaks. 
Domain#intercept, which looks at the Node error convention (error as the 
async callback's first argument), certainly suffers from the same issue, but 
looks practical enough. I lack the experience with Node domains. If some 
have it, it'd be interesting to share it.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak callbacks?

2013-11-12 Thread David Bruant

Le 12/11/2013 10:18, Pierre Frisch a écrit :

Could I present another example where WeakRef are required.

I am writing an object management layer similar to what exists in other languages 
and systems. One of the requirements is uniquing so that each object exists only 
once in memory. Let's say I retrieve a company and its employees and later 
retrieve a company project and its participants. I need the members of the 
employee list and the members of the participant list to point to the same 
objects. The other requirement is not to leak, if an object is only accessible 
from within the management layer it should be candidate for garbage collection.
Your description suggests that there is an external source of truth 
dictating which objects are expected to be the same. I imagine ids in a 
database.
I feel your use case is very close to what has been covered so far with 
attempts to reimplement CapTP or the Cap'n Proto protocol in JS. These 
contain a form of object management layer like the one you describe. 
Among other things, they preserve object identity within one Vat.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generator Arrow Functions

2013-11-12 Thread David Bruant

Le 12/11/2013 18:30, Axel Rauschmayer a écrit :
This is relevant, too: 
http://esdiscuss.org/topic/function-declarations-with-lexical-this


I'd still argue that generator arrow functions make more sense than 
generator function declarations.
I don't have a strong opinion in this debate, but I've seen something 
relevant in Angus Croll's slides [1] recently:


  let idGenerator = (id=0) => () => id++;

  let nextFrom1000 = idGenerator(1000);
  nextFrom1000(); // 1000
  nextFrom1000(); // 1001

David

[1] https://speakerdeck.com/anguscroll/es6-uncensored?slide=42
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak callbacks?

2013-11-12 Thread David Bruant

Le 12/11/2013 13:42, Kris Kowal a écrit :
One of the concerns with promises is that they consume exceptions that 
may or may not be handled. I have been looking forward for WeakRef as 
one of the avenues available to mitigate this problem. A post-mortem 
finalizer would be able to surface an error that was trapped by a 
promise or promises that were eventually garbage collected, and 
therefore provably never-to-be-handled.


It is true that this problem can be decisively mitigated in other 
ways, like requiring a promise to forward to a terminal done() in 
the same turn of the event loop, but I find this particular solution 
unpalatable. I do find a promise inspector compelling, one that will 
show an error until it is handled, but even in this case, I think it 
is compelling to visually elevate an unhandled error to a provably 
never-to-be-handled error, and this is not possible, at least outside 
chrome-space, without WeakRef.
I understand the need to know when a promise has an unhandled error at 
development time, I'm less clear on why you need to know it at runtime. 
What would you do with this information? Handle the error?
If you think of wrapping promises in weakrefs, why not just add error 
handling?

To me, it looks like the same amount of effort.
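For concreteness, the "just add error handling" route is a one-liner (names 
are hypothetical):

    doSomethingAsync()
      .then(useResult)
      .catch(reportError); // or .done(useResult, reportError) in Q-style libraries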

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: an idea for replacing arguments.length

2013-11-10 Thread David Bruant

Le 10/11/2013 19:12, Allen Wirfs-Brock a écrit :
One of the the few remaining uses of a function's 'arguments' binding 
is to determine the actual number of passed arguments.  This is 
necessary in some overloading scenarios where a function has different 
behavior when an argument is completely absent than it has when 
undefined (or any other default value) is explicitly passed in that 
parameter position.  That situation occurs in a number of DOM APIs and 
even a few ES library functions.


For example(see https://bugs.ecmascript.org/show_bug.cgi?id=1877 ), 
Array.prototype.splice returns different results for:

   [1,2,3].splice()
and
   [1,2,3].splice(undefined)

The natural ES6 declaration for a splice function is:

   function splice(start, deleteCount, ...items) {...

but if you write it this way then within the body you have to have a 
test like:


if (arguments.length == 0) {...

to implement the correct  web-compatable behavior.

Or, alternatively you could declare the functions as:

function splice(...actualArgs) {
 let [start, stop, ...item] = actualArgs;
 ...
 if (actualArgs.length == 0) {...

So, to implement a Web-compaable version of splice you either have to 
use 'arguments' to determine the actual number of passed objects or 
you need to declare it with a bogus parameter pattern and use explicit 
or implicit destructuring to parse out the positional parameters.
I imagine it also breaks splice.length, but that's fixed by making 
length configurable (writable? I don't remember).

I'm fine with the second solution. It's inelegant, but it's also legacy...
I don't think differentiating .splice() and .splice(undefined) and 
equivalent use cases is a practice that should be encouraged.
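For reference, the difference in question, as I understand the web-compatible 
behavior:

    [1, 2, 3].splice();          // returns [], nothing removed
    [1, 2, 3].splice(undefined); // returns [1, 2, 3]: start coerced to 0, everything removed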


One way around this dilemma would be to provide a syntactic affordance 
for determing the actual argument count.  For example, one possibility 
would be to allow the last item of any formal parameter list to be an 
item of the syntactic form:


ActualArgumentCount : '#' BindingIdentifier

So, the declaration for splice could then be:

   function splice(start, deleteCount, ...items, #argCount) {
  ...
  if (argCount == 0) {...

Thoughts?

Why create something new if it's only encouraging a bad practice?
Is there a good use case?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: an idea for replacing arguments.length

2013-11-10 Thread David Bruant

Le 10/11/2013 22:19, Brendan Eich a écrit :

On Nov 10, 2013, at 9:12 PM, Andrea Giammarchi andrea.giammar...@gmail.com 
wrote:

Not sure why this is so needed though.

Allen's posts make the case: webidl and varargs-style functions. Not all legacy.
WebIDL creates spec, not code. The language syntax doesn't need to 
evolve for that. Allen showed that rest params+destructuring allows 
self-hosting without |arguments|

Varargs functions have rest parameters.

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: an idea for replacing arguments.length

2013-11-10 Thread David Bruant

Le 10/11/2013 22:30, K. Gadd a écrit :
JSIL and embind both need arguments.length for efficient method call 
dispatch when dealing with overloaded functions. Is it your intent 
that all such scenarios must now pay the cost of creating an array (to 
hold the rest arguments) and then destructuring it, for every call? At 
present it's possible to avoid this overhead in V8 and SpiderMonkey by 
using arguments.length + arguments[n] or by using arguments.length + 
patterned argument names.
The array created by rest arguments has no reason to cost more than the 
arguments object. It's only an implementation concern.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: an idea for replacing arguments.length

2013-11-10 Thread David Bruant

Le 10/11/2013 22:42, Brendan Eich a écrit :

On Nov 10, 2013, at 9:24 PM, David Bruant bruan...@gmail.com wrote:

WebIDL creates spec, not code. The language syntax doesn't need to evolve for 
that.

Wrong, webidl and jsidl (and jsil and embind) are both interface and a bit of 
implementation. Testing argc != testing sentinel value. Two different 
semantics, plausibly deserving fast and terse syntax.
One of the semantics is admitted as a bad practice. I still don't 
understand why it should be encouraged.
The other use cases are compile-to-JS use cases. Can implementations optimize 
the pattern Allen showed in his initial post?

function splice(...actualArgs) {
let [start, stop, ...item] = actualArgs;
...
if (actualArgs.length == 0) {...


Allen showed that rest params+destructuring allows self

Read Allen's replies, stop ignoring known counter-arguments.

Not my intention, sorry it came out this way, I had missed a few posts.

Allen wrote:

So, if a lot of DOM APIs need to be implemented as function (...args) {

then that is likely what will appear in documentation.
As Mark said, generate doc from WebIDL. Else, MDN and WPD are 
CC-licenced wikis :-)


Also note that there is likely to be actual computational overhead in 
both creating a rest argument and in destructuring it.  In some cases, 
that overhead may be an issue.

Can implementations optimize this pattern to remove this overhead?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: an idea for replacing arguments.length

2013-11-10 Thread David Bruant

Le 10/11/2013 23:34, Brendan Eich a écrit :

Dmitry Soshnikov wrote:
Moreover, for this particular `splice` example, I don't think the 
`(start, deleteCount, ...rest)` is the best signature (not to say, 
incorrect signature). As again was mentioned, a var-args function 
seems should just use the `...rest` params, and exactly starting from 
the position when the first optional argument is started. And if it's 
started right from the position 0 (as with the `splice`), then 
probably the more natural signature would be the `(...args)`.


This gives the wrong function.length result, though (as Allen pointed 
out).
I wrote in an earlier message that function length is writable, but I 
was confusing with function name... Sorry about that.

Would it make sense to make function length writable?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak callbacks?

2013-11-08 Thread David Bruant

Le 08/11/2013 17:09, Jason Orendorff a écrit :

(As a bonus, the weirdness will happen in one implementation and not
another, and you and your users will blame the implementation. So
there will be pressure on implementers to reduce the nondeterminism by
doing GC more frequently—which trades off against other performance
measures.)

Super-bonus: Heisenbugs (bugs that happen in prod, but not while debugging)
https://en.wikipedia.org/wiki/Heisenbug


And in this case, it all seems unnecessary. There is apparently
already explicit code for both removing B and C, and later coping with
their disappearance (since the weak reference may go null). That code
could just as easily set a bit on B and C marking them as removed, and
then test that in the chasing code.

Agreed. In a way, Kevin conceded it when he wrote in an earlier message:
I had to manually remove the event listeners at an appropriate time 
(and finding an appropriate time can be difficult!)
And this looks very much like a software engineering issue, not a 
language issue. Maybe we (JavaScript developers!) should invest in 
better memory tooling and see how far it gets us. We have fantastic tooling 
for studying time perf (Chrome has 2 types of profilers and the timeline 
view to help with the 60fps, Firefox and IE11 getting there too), how 
come we're still doing low-level heap snapshots for memory perf? Is 
space fundamentally that much harder to study than time?


Taking the tooling road first, worst case, we throw the tooling away... 
not an option when a feature is in the wild.


Let's try at least?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak callbacks?

2013-11-08 Thread David Bruant

Le 08/11/2013 20:35, Mark S. Miller a écrit :

Please try -- such experiments are interesting.

I am :-)

But even if this experiment is successful, I hope and expect that 
we'll have weakrefs and post-mortem finalization in ES7. They are 
needed for many other things, such as distributed acyclic garbage 
collection (as in adapting the CapTP ideas to distributed JS).

yes...
Speaking of which, could you explain the use of proxyRef.get from the 
related example? [1]
At least in the executor function, I don't understand how the object 
could be still alive and why the call is needed.


David

[1] 
http://wiki.ecmascript.org/doku.php?id=strawman:weak_references#distributed_acyclic_garbage_collection

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak callbacks?

2013-11-07 Thread David Bruant

Le 07/11/2013 22:46, K. Gadd a écrit :
That's the sort of obstacle that factors into a developer's choice of 
language and toolset. I've seen this particular concern with ES crop 
up in the past on real projects, and I've seen firsthand how difficult 
it is to avoid uncollectable cycles in a language environment without 
any sort of weak reference mechanism. Leaking large uncollectable 
cycles can have catastrophic consequences in multimedia applications 
and games, where those cycles might be retaining images or sounds or 
other huge game assets.
The repeated use of the word cycle worries me. Cycles aren't a problem 
by themselves with mark and sweep, do we agree?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Cross-global instanceof

2013-11-02 Thread David Bruant

Le 02/11/2013 03:13, Allen Wirfs-Brock a écrit :

On Nov 1, 2013, at 6:05 PM, David Bruant wrote:

I'm not sure about proxy returning Proxy as tag name. Is that a good idea? 
Brand feels like something that could safely transparently cross proxies.

There is a note in the ES6 draft on that Proxy case of O.P.toString that says: "This could 
be used as an isProxy test."  Do we really want that?  Nobody has answered that question yet?  
What do you mean by "brand transmitted across proxies"?  ES6 has no general concept of 
brand.

lousy language, my mistake. I meant @@toStringTag.


We could handle that case by internally doing O.p.toString.call(this.[[target]]) 
for the proxy case.  Or we could just turn it into this.toString().  But 
neither of those seem particularly correct, in general.

Or we could simply not special case Proxy exotic objects and then Proxies would 
be handled like any other object, the value of the object's @@toStringTag 
property would be accessed and used to compose the toString result.
Or what about a third optional argument to the Proxy constructor to set 
the @@toStringTag?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Cross-global instanceof

2013-11-01 Thread David Bruant

Le 02/11/2013 01:08, Brandon Benvie a écrit :

On 11/1/2013 4:59 PM, Brandon Benvie wrote:

On 11/1/2013 4:31 PM, Brandon Benvie wrote:

In the spec for Object.prototype.toString:

'If tag is any of "Arguments", "Array", "Boolean", "Date", "Error", 
"Function", "Number", "RegExp", or "String" and SameValue(tag, 
builtinTag) is false, then let tag be the string value "~" 
concatenated with the current value of tag.'


An interesting consequence of this is that a Proxy for any of these 
will default to being ~ + target class. So 
`Object.prototype.toString.call(new Proxy([], {}))` is [object 
~Array]. But it seems the ship has already sailed on Proxies 
being conspicuously not interchangeable with their targets in many 
cases...


Actually that's incorrect. Proxies explicitly will return Proxy for 
their tag. Same problem though.

In what other ways has the ship sailed?
At least regular objects and arrays can be faithfully interchanged I 
think, no? Things get complicated with Date/WeakMap/etc because of 
private state, but I remain hopeful a solution can be found in the ES7 
timeframe (or whatever the next iteration is called).


I'm not sure about proxy returning Proxy as tag name. Is that a good 
idea? Brand feels like something that could safely transparently cross 
proxies.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Cross-global instanceof

2013-10-31 Thread David Bruant

Le 31/10/2013 16:38, Anne van Kesteren a écrit :

This keeps coming up. Last instance:
http://mxr.mozilla.org/mozilla-central/source/dom/base/ObjectWrapper.jsm#16

We have it for Array using Array.isArray().

Array.isArray is not at all equivalent to instanceof. Not even related.
Object.create(Array.prototype) instanceof Array === true

var a = [];
a.__proto__ = null;
Array.isArray(a) === true;

We need both types of checks, one for "is such an object in the prototype 
chain?" and the other for "how is this object magic?" (Array, Date, 
WeakMap, File, whatev's). The source code you're linking to seems to 
want the latter.



It is unclear why the
arguments for arrays not apply to other types of objects, such as
array buffers, nodes, blobs, files, etc.

We could introduce something like

   Object.crossGlobalInstanceOf(instance, type)

which checks @@crossGlobalBrand or some such which works for built-ins
and is also usable by jQuery and the like.
I'm not sure it's worth making it work for jQuery. This is trying to 
make a good use of same-origin multi-global which shouldn't exist in the 
first place. Keeping same-origin access as it is and encouraging people 
to add @sandbox even on same-origin iframes seems like a better idea.


Should the addition be a nicer Object.prototype.toString.call?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Object.create() VS functions

2013-10-26 Thread David Bruant

Le 26/10/2013 15:44, Michaël Rouges a écrit :

Bonjour à tous,

Bonjour,

Knowing that every function is an object, I am surprised that the 
Object.create() method doesn't really allow cloning a function.

I don't follow the logic of this sentence.
In any case, the purpose of Object.create is to create a normal object, 
that is an object as commonly understood when it comes to its own 
properties (no magic property like array's length), without private 
state (like Date objects) and that is not callable.

Also, Object.create does not create a clone, but a new object.
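A small illustration of "a new object, not a clone":

    var proto = { greet: function () { return 'hi'; } };
    var o = Object.create(proto);

    o.hasOwnProperty('greet');          // false: greet lives on proto, not on o
    Object.getPrototypeOf(o) === proto; // true: o is a fresh object linked to proto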


implementation
If you don't care about |this|, f2 = f.bind(undefined) can be considered 
as a way to clone a function.


  function f2(){
    return f.apply(this, arguments);
  }
works too.

  f2 = new Proxy(f, {})
is a form of function cloning as well.

Very much like object cloning, function cloning does not have one unique 
definition.



Is there a reason not to do that, please?
I would ask the opposite question: is there a reason to do that? Usually 
features are added because there is a driving use case which you haven't 
provided.


Also, usually, changing the semantics of an existing built-in isn't a 
good idea given that it may break existing code relying on it.


Last, if you can implement it yourself, why do you need it to be part of 
the language? There are hundreds of convenience functions that could be 
added. Why this one more than others?


Thanks,

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: has the syntax for proxies been finalized ?

2013-10-18 Thread David Bruant

Le 18/10/2013 07:19, Angus Croll a écrit :
I couldn't find a commitment to a specific syntax in the latest ES6 
standard

The latest official news is in the May 2013 TC39 notes:
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-05/may-21.md#44-proxies
The final design of proxies is the direct proxies design. As Tom said, 
a proxy is now created doing:

var p = Proxy(target, handler)

Proxy.create and Proxy.createFunction are aimed at disappearing.
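For comparison with the old API quoted below, a get trap under the 
direct-proxies design looks roughly like this (using the constructor form as 
it eventually shipped):

    var p = new Proxy({}, {
      get: function (target, name) {
        return 'Hello ' + name;
      }
    });
    p.World; // 'Hello World'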

Gecko, chrome experimental, traceur and 'node 
--harmony-proxies' support the Proxy.create syntax (detailed in 
http://wiki.ecmascript.org/doku.php?id=harmony:proxies)


e.g.
var proxy = Proxy.create({
 get: function(p, n) {
  return 'Hello ' + n;
 }
});
proxy.World //'Hello World'
On the SpiderMonkey (Gecko implements the DOM and other platform APIs 
and SpiderMonkey is the part that implements ECMAScript) side, I filed a 
bug to get rid of these as it's indeed confusing to have both APIs 
exposed in web pages:

https://bugzilla.mozilla.org/show_bug.cgi?id=892903

IIRC, the V8 team had started implementing something (behind a flag), 
and then wars on Proxy design happened, so they chose to wait for the 
design to stabilize. Now may be a good time to restart



However MDN calls the above the 'Old Proxy API'.
I'm glad I succeeded in, at least, making people wonder what that was 
all about :-)


Since I've been following closely the design of proxies, I documented 
them on MDN. Especially after the implementation of direct proxies in 
Firefox (where I moved the documentation of the previous API to its own 
page and try to explain the best I could that people should not use it). 
I'm happy to improve the doc if something isn't clear (on the feature 
itself or clarify the current technico-social mess of different APIs in 
the wild).


As a side note, to my knowledge, the only native implementation of 
direct proxies is in Firefox, but it's incomplete and has known bugs. 
You can see the known limitations and bugs here: 
https://bugzilla.mozilla.org/showdependencytree.cgi?id=703537&hide_resolved=1 
(the "depends on" section; Bug 787710 is particularly funny :-)).


If you want to play with proxies, I think that the most 
faithful-to-the-spec implementation is Tom's polyfill: 
https://github.com/tvcutsem/harmony-reflect/blob/master/reflect.js where 
he's using the old API where available to implement the new one.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-15 Thread David Bruant

Le 14/10/2013 23:25, Brendan Eich a écrit :

Jorge Chamorro wrote:

The only work around for that is making as few requests as possible.


+∞, +§, and beyond.

This is deeply true, and a hot topic with browser/network-stack 
engineers right now.
It ought to be with software engineers as well and it's one of the 
reasons why promise pipelining [1] is so appealing.


David

[1] http://erights.org/elib/distrib/pipeline.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-15 Thread David Bruant

Le 14/10/2013 23:20, Jorge Chamorro a écrit :

On 14/10/2013, at 22:11, David Bruant wrote:


You already can with inlining, can't you?

Yes and no:

-It's much more complicated than pre zipping a bunch of files and adding a ref 
attribute.
-It requires additional logic at the server side, and more programming.
Not really. If there was a need for lots of people, people would have 
come up with an open source grunt task already (or any other open source 
tooling).
The fact that people haven't tried too hard may also be an indication 
that bundling isn't such a pressing need.


With the appropriate tooling, it could be as simple to inline in an HTML 
as it is to gzip (2 clicks for each).


With tooling being such a hot topic these days (so many talks on tooling 
and automation in confs!) and the MIT-licence culture around it, I feel 
we, web devs, should start considering asking less from the platform and 
more from the tooling.



-It's not trivial always: often you can't simply concatenate and expect it to 
work as-is (e.g. module scripts).
-You might be forcing the server to build and/or gzip (á la PHP) on the fly = 
much more load per request.

This is equally true for zip-bundling, no?


-Inlined source isn't always semantically === non-inlined source => bugs.

True. It's admittedly easy to escape with decent discipline.


It would also be very interesting to know if you had .zip packing, would you be 
inlining?

... yeah ... good point, I probably wouldn't :-)

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread David Bruant

Le 14/10/2013 15:16, Anne van Kesteren a écrit :

The idea is to use a somewhat more unique separator, e.g. $sub/. Old
browsers would simply fetch the URL from the server and if the server
is written with legacy in mind would serve up the file from there. New
browsers would realize it's a separator and fetch the URL up to the
separator and then do addressing within the zip archive once its
retrieved.

https://gist.github.com/wycats/220039304b053b3eedd0 has a more
complete summary of our current thinking. (Not entirely up to date.)
I feel this document lacks a use case/problem/rationale section. It'd 
also be interesting to explore how people solve the same problem today 
(inlining mostly) and explain why this proposal is significantly (!) 
better (I doubt it is, but I'm always open to being proven wrong).


From what I understand, the problem being solved by bundling is faster 
initial load times (feel free to correct me at this point).


Back to something Brendan said:
I agree with your approach that values ease of content-only (in the 
HTML, via script src= ref=) migration. I think David and others 
pointing to HTTP 2 undervalue that. 
Recently, a friend of mine had a performance problem on his blog. It's a 
Wordpress blog on an average hosting service, nothing fancy. The landing 
page was loading in 14sec. He applied a few tricks (he's not a web dev, 
so nothing too fancy), got a CDN wordpress plugin, async-loaded facebook 
and other social widgets and now the page loads in 4.5 secs and has 
something on screen within about 2sec.
There are 68 requests, 1.2Mb (!) of content downloaded, but it works. 
There are also lots of inline scripts in the middle of the page and that 
creeps me out and makes me want to murder a couple of people who work on 
Wordpress themes... but it works. The web works.
And that's a already semi-complex site. I imagine things to only be 
better with content-only websites. How much are we trying to save with 
the bundling proposal? 200ms? 300ms? Is it really worth it? It feels like 
we're trying to solve a first-world problem.


I feel that before adding new syntax and complexifying yet again the 
platform, a thorough performance study should be made to be sure it'll 
be significantly better than what we do today with inlining.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

