Re: Generator Arrow Functions

2013-11-18 Thread Ѓорѓи Ќосев
On 11/15/2013 06:18 PM, Claude Pache wrote:

 On 15 Nov 2013, at 17:59, Rick Waldron waldron.r...@gmail.com wrote:


 On Fri, Nov 15, 2013 at 11:34 AM, Axel Rauschmayer a...@rauschma.de wrote:

 (...)

 That would make the async programming code more compact, too (I’m
 assuming a nullary paren-free arrow variant and I prefer the
 asterisk after the arrow):


 To be clear, this preference is inconsistent with all other generator
 forms where the asterisk is before the params, per Brandon's original
 examples.

 The other point of view is that this preference is consistent with
 other generator forms where the asterisk is after the token that
 defines the general role of the construct as a procedure (either the
 `function` keyword, or the `=>` token). Personally, I tend to read
 `function*` as a unit meaning generator function, and so would I for
 `=>*`.

There is one reason (beyond bikeshedding) why I think the star
should be together with the arrow. (I have no opinion on whether it should
be before or after the arrow, though.)

It's harder to scan whether this is a generator arrow function or a
normal arrow function because the star is too far away:

someFunction(*(someArgument, anotherArgument) => {
... code ... 
});

compared to this form, where it's immediately obvious that this is not a
regular function, just by looking at the composed symbol (arrow-star):

someFunction((someArgument, anotherArgument) =>* {
... code ... 
});


The arbitrary number of arguments, as well as the ability to split them
across multiple lines, makes it potentially even harder to reliably locate
the star:

someFunction(*(argument,
  anotherArgument,
  somethingThird) => {
  ... code ...
});

vs

someFunction((argument,
 anotherArgument,
 somethingThird) =>* {
  ... code ...
});

Here is the single-argument example, for completeness:

someFunction(*singleArgument => yield asyncOp(singleArgument));

versus

someFunction(singleArgument =>* yield asyncOp(singleArgument));

in which it looks like the star is somehow related to the argument.

If you're not convinced, you can try to find all the arrow-stars in the
above code, then try to find all the arrows and determine whether
they're generator arrows or regular arrows :)



Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Dmitry Lomov
We definitely are looking for feedback on the proposal! Please keep it
coming.
Here are some answers reflecting our current thinking.


On Sun, Nov 17, 2013 at 4:07 PM, K. Gadd k...@luminance.org wrote:

 Since the strawman is close to the final spec, questions/nitpicks:

 I noticed the current spec explicitly provides no control over element
 alignment/padding. Are there specific reasons behind that? It dramatically
 reduces the value of typed objects for doing file I/O (aside from the
 endianness problem, which actually doesn't matter in many of those cases),
 and it means they can't be used to provide direct compatibility with C
 struct layouts in specific cases - for example, filling vertex buffers. I
 understand that there is no desire to provide the *full* feature set
 available in C (fixed-size arrays, variable-size structs, etc.) but
 alignment/padding control is used quite a bit.

 DataView has significant performance issues (some due to immature
 v8/spidermonkey implementations, some due to the design) that make it
 unsuitable for most of these use cases, even if it's the 'right' way to
 handle endianness (disputable).

 The handwaving that WebGL implementations can 'just introspect' in these
 cases seems shortsighted considering the reality of WebGL: hundreds of
 shipped libraries and apps using current WebGL cannot be guaranteed to keep
 doing the right thing when interacting with typed arrays. If a typed array
 full of Typed Objects can still be treated like an array full of bytes or
 float32s, that allows existing WebGL code to keep working, as long as you
 ensure the layout of the objects is correct. That means people can start
 incrementally adding uses of Typed Objects to their code right away - and
 it means they can introduce them based on a polyfill of Typed Objects
 instead of waiting for the browser to implement *both* Typed Objects and
 new WebGL support for Typed Objects.


The idea for alignment/padding is to _specify_ it for typed objects. There
is a rule that tells the user how the typed object will be aligned, and that
rule is set in stone. If the programmer declares a typed object, she/he
knows what the memory layout is. However, we do not (at least in the first
version) provide any explicit API for changing the alignment.

The rule is "Each field is padded to reside at a byte offset that is a
multiple of the field type’s byte alignment (specified below via the
[[ByteAlignment]] internal property). The struct type’s byte length is
padded to be a multiple of the largest byte alignment of any of its
fields." So every field is at its natural boundary, and the natural
boundary of the struct is the largest of the natural boundaries of its
field types. This rule is pretty much what C compilers do anyway.
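
For illustration, here is a sketch of what the rule implies, using the
TypedObject names from the strawman/polyfill (the offsets are dictated by
the quoted rule itself, not taken from any particular implementation):

var S = new TypedObject.StructType({
  a: TypedObject.uint8,    // byteAlignment 1 -> offset 0
  b: TypedObject.float64,  // byteAlignment 8 -> padded to offset 8
  c: TypedObject.uint16    // byteAlignment 2 -> offset 16
});
// The byte length is padded to a multiple of the largest field alignment
// (8), so S occupies 24 bytes -- the same layout a C compiler produces
// for struct { uint8_t a; double b; uint16_t c; }.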

This appears to us to be a good compromise between expressiveness and
API/implementation complexity.

BTW, DataView is indeed implemented poorly performance-wise, at least in
V8. It is on my short-term list to fix this.




 My primitives have control over alignment/padding and it doesn't seem to
 be that hard to implement (in JS, that is) - are there things that make
 this hard to provide from inside a VM? Being able to add extra padding, at
 least, would be pretty useful even if alignment has to remain locked to
 whatever the requirements are.

 I see reference types are exposed (string, object, any) - the way this
 actually works needs to be clearly stated. Is it storing a GC pointer into
 the buffer? Are there safety concerns if it's overwritten, or loaded from a
 json blob or something else like that? How big are string/object/any in the
 internal representation? Does their size depend on whether the running
 browser is 32-bit or 64-bit?


Typed objects that have 'string', 'object' or 'any' fields are
non-transparent. This means that there is no way for the programmer to get
at their underlying storage. This is enforced by the spec.
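
A minimal sketch of the consequence from user code (names as in the
strawman; exactly which accessor a transparent object would expose is
deliberately left open here):

var Plain   = new TypedObject.StructType({n: TypedObject.uint32});
var WithRef = new TypedObject.StructType({n: TypedObject.uint32,
                                          o: TypedObject.object});
var p = new Plain();    // transparent: its raw bytes may be reachable
var q = new WithRef();  // opaque: no API may hand out its storage, so the
                        // GC pointer held in 'o' can never leak as bytes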

Thanks and hope this helps,
Dmitry



 I'd be open to collaborating on a polyfill of Typed Objects once it's
 clear how they actually work. We can repurpose JSIL's existing
 implementation and modify it to get the semantics in the spec.


 On Sun, Nov 17, 2013 at 5:04 AM, Till Schneidereit 
 t...@tillschneidereit.net wrote:

 On Sun, Nov 17, 2013 at 10:23 AM, K. Gadd k...@luminance.org wrote:

 Are there any known-good polyfills for the current draft Typed Objects /
 Binary Data spec?


 I want this, too, and will start working on it soon-ish if nobody else
 does or already did.


 Presently, JSIL has a set of primitives that roughly correspond with a
 big chunk of the draft specification. I'm interested in seeing whether they
 can work atop ES6 typed objects, which means either adapting it to sit on
 top of an existing polyfill, or turning my primitives into a polyfill for
 the draft spec. If it's useful I might be able to find time for the latter
 - would having a polyfill like that be useful (assuming a good one doesn't
 already exist)?

 

Re: Modules vs Scripts

2013-11-18 Thread Anne van Kesteren
On Sun, Nov 17, 2013 at 2:23 PM, Matthew Robb matthewwr...@gmail.com wrote:
 I like it. If there is a desire to stay closer to script I could see
 <script module> and <script module=>

Has to be <script type=module>, otherwise it would execute in legacy clients.


-- 
http://annevankesteren.nl/


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread K. Gadd
(Re: minor off-list clarification on non-transparent objects)

I see, it looks like this is the relevant bit of the strawman:

"There are three built-in types that are considered *opaque*: Object, string,
and Any. For security, they are not allowed to expose their internal
storage since they may contain pointers (see below). A struct or array type
is opaque if it contains opaque fields or elements, respectively.

A type that is not opaque is *transparent*."

Overlooked it on my first read-through since it isn't directly referenced
elsewhere. That seems like it addresses the problem.


Looking at the strawman and the polyfill and the spidermonkey
implementation, each one seems to have a different API for arrays. Is this
something that will get standardized later? I've seen get, getItem and []
as 3 different ways to read values out of a typed object array; [] seems
like an obvious poor choice in terms of being able to polyfill, but I can
see how end users would also want typed object arrays to act like actual
arrays.

The spidermonkey implementation seems to expose 'fieldNames' in addition to
fieldOffsets/fieldTypes for reflection, which seems like a good idea.


If DataView were to also get optimized in SpiderMonkey, that would release
a lot of the pressure (use-case wise) for Typed Objects to expose
fine-grained control over alignment/padding and it would make it less
immediately necessary for them to exist. That's probably a good thing.


What is the intended use scenario when trying to pass a typed object array
to WebGL? Pass the array's .buffer where a typed array would normally be
passed? Or is it basically required that WebGL be updated to accept typed
object arrays? It's not totally clear to me whether this will work or if
it's already been figured out.


The elementType property on typed object arrays is a great addition; I'd
suggest that normal typed arrays also be updated to expose an elementType.
i.e. (new Uint8Array()).elementType === TypedObject.uint8



On Mon, Nov 18, 2013 at 2:35 AM, Dmitry Lomov dslo...@chromium.org wrote:

 We definitely are looking for feedback on the proposal! Please keep it
 coming.
 Here are some answers reflecting our current thinking.


 On Sun, Nov 17, 2013 at 4:07 PM, K. Gadd k...@luminance.org wrote:

 Since the strawman is close to the final spec, questions/nitpicks:

 I noticed the current spec explicitly provides no control over element
 alignment/padding. Are there specific reasons behind that? It dramatically
 reduces the value of typed objects for doing file I/O (aside from the
 endianness problem, which actually doesn't matter in many of those cases),
 and it means they can't be used to provide direct compatibility with C
 struct layouts in specific cases - for example, filling vertex buffers. I
 understand that there is no desire to provide the *full* feature set
 available in C (fixed-size arrays, variable-size structs, etc.) but
 alignment/padding control is used quite a bit.

 DataView has significant performance issues (some due to immature
 v8/spidermonkey implementations, some due to the design) that make it
 unsuitable for most of these use cases, even if it's the 'right' way to
 handle endianness (disputable).

 The handwaving that WebGL implementations can 'just introspect' in these
 cases seems shortsighted considering the reality of WebGL: hundreds of
 shipped libraries and apps using current WebGL cannot be guaranteed to keep
 doing the right thing when interacting with typed arrays. If a typed array
 full of Typed Objects can still be treated like an array full of bytes or
 float32s, that allows existing WebGL code to keep working, as long as you
 ensure the layout of the objects is correct. That means people can start
 incrementally adding uses of Typed Objects to their code right away - and
 it means they can introduce them based on a polyfill of Typed Objects
 instead of waiting for the browser to implement *both* Typed Objects and
 new WebGL support for Typed Objects.


 The idea for alignment/padding is to _specify_ it for typed objects. There
 is a rule that tells the user how the typed object will be aligned, and that
 rule is set in stone. If the programmer declares a typed object, she/he
 knows what the memory layout is. However, we do not (at least in the first
 version) provide any explicit API for changing the alignment.

 The rule is "Each field is padded to reside at a byte offset that is a
 multiple of the field type’s byte alignment (specified below via the
 [[ByteAlignment]] internal property). The struct type’s byte length is
 padded to be a multiple of the largest byte alignment of any of its
 fields." So every field is at its natural boundary, and the natural
 boundary of the struct is the largest of the natural boundaries of its
 field types. This rule is pretty much what C compilers do anyway.

 This appears to us a good compromise between API and implementation
 complexity and expressiveness.

 BTW, DataView are indeed 

Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Dmitry Lomov
On Mon, Nov 18, 2013 at 12:07 PM, K. Gadd k...@luminance.org wrote:

 (Re: minor off-list clarification on non-transparent objects)

 I see, it looks like this is the relevant bit of the strawman:

 "There are three built-in types that are considered *opaque*: Object,
 string, and Any. For security, they are not allowed to expose their
 internal storage since they may contain pointers (see below). A struct or
 array type is opaque if it contains opaque fields or elements,
 respectively.

 A type that is not opaque is *transparent*."
 Overlooked it on my first read-through since it isn't directly referenced
 elsewhere. That seems like it addresses the problem.


 Looking at the strawman and the polyfill and the spidermonkey
 implementation, each one seems to have a different API for arrays. Is this
 something that will get standardized later? I've seen get, getItem and []
 as 3 different ways to read values out of a typed object array; [] seems
 like an obvious poor choice in terms of being able to polyfill, but I can
 see how end users would also want typed object arrays to act like actual
 arrays.


We will definitely support []. get/set will not do, because typed arrays'
existing set method does something completely different
(namely, memcpy). We might consider setItem/getItem - in the polyfill,
these are considered internal methods.
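
To make the three styles concrete (a sketch; getItem/setItem are the
polyfill-internal names mentioned above and may change):

var Ints = new TypedObject.ArrayType(TypedObject.int32, 4);
var a = new Ints();
a[0] = 42;           // indexed access: the form that will be supported
// a.getItem(0)      // polyfill-internal accessor
// a.set(otherInts)  // NOT an element setter: typed arrays' set() bulk-copies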


 The spidermonkey implementation seems to expose 'fieldNames' in addition
 to fieldOffsets/fieldTypes for reflection, which seems like a good idea.



Agreed, feel free to file an issue (or a patch!) against the polyfill :)




 If DataView were to also get optimized in SpiderMonkey, that would release
 a lot of the pressure (use-case wise) for Typed Objects to expose
 fine-grained control over alignment/padding and it would make it less
 immediately necessary for them to exist. That's probably a good thing.


Agreed, that is partly my motivation for prioritizing this work in V8.




 What is the intended use scenario when trying to pass a typed object array
 to WebGL? Pass the array's .buffer where a typed array would normally be
 passed? Or is it basically required that WebGL be updated to accept typed
 object arrays? It's not totally clear to me whether this will work or if
 it's already been figured out.


I have some experiments lying around that wrap WebGL APIs to make them
understand typed objects - I'll polish them and post them as samples.
Fundamentally, the introspection APIs on typed objects should be enough to
use typed objects with WebGL, but eventually the WebGL API will probably
accept typed objects directly.
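
A sketch of what such a wrapper boils down to, assuming (as suggested
earlier in the thread) that a transparent typed-object array exposes its
backing ArrayBuffer via a .buffer property:

var gl = document.createElement("canvas").getContext("webgl");
var Vertex   = new TypedObject.StructType({x: TypedObject.float32,
                                           y: TypedObject.float32});
var Vertices = new TypedObject.ArrayType(Vertex, 1024);
var verts    = new Vertices();
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
// bufferData already accepts an ArrayBuffer, so existing WebGL entry
// points keep working as long as the struct layout matches the shader:
gl.bufferData(gl.ARRAY_BUFFER, verts.buffer, gl.STATIC_DRAW);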



 The elementType property on typed object arrays is a great addition; I'd
 suggest that normal typed arrays also be updated to expose an elementType.
 i.e. (new Uint8Array()).elementType === TypedObject.uint8


The idea is that all typed arrays will be special cases of typed objects,
so that Uint8Array = new ArrayType(uint8).
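
In other words (a sketch of the intended unification; no engine implements
it yet):

var Bytes = new TypedObject.ArrayType(TypedObject.uint8);
// Uint8Array would become exactly this array type, so reflection such as
// elementType falls out for free:
//   (new Uint8Array(8)).elementType === TypedObject.uint8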

Thanks,
Dmitry





 On Mon, Nov 18, 2013 at 2:35 AM, Dmitry Lomov dslo...@chromium.org wrote:

 We definitely are looking for feedback on the proposal! Please keep it
 coming.
 Here are some answers reflecting our current thinking.


 On Sun, Nov 17, 2013 at 4:07 PM, K. Gadd k...@luminance.org wrote:

 Since the strawman is close to the final spec, questions/nitpicks:

 I noticed the current spec explicitly provides no control over element
 alignment/padding. Are there specific reasons behind that? It dramatically
 reduces the value of typed objects for doing file I/O (aside from the
 endianness problem, which actually doesn't matter in many of those cases),
 and it means they can't be used to provide direct compatibility with C
 struct layouts in specific cases - for example, filling vertex buffers. I
 understand that there is no desire to provide the *full* feature set
 available in C (fixed-size arrays, variable-size structs, etc.) but
 alignment/padding control is used quite a bit.

 DataView has significant performance issues (some due to immature
 v8/spidermonkey implementations, some due to the design) that make it
 unsuitable for most of these use cases, even if it's the 'right' way to
 handle endianness (disputable).

 The handwaving that WebGL implementations can 'just introspect' in these
 cases seems shortsighted considering the reality of WebGL: hundreds of
 shipped libraries and apps using current WebGL cannot be guaranteed to keep
 doing the right thing when interacting with typed arrays. If a typed array
 full of Typed Objects can still be treated like an array full of bytes or
 float32s, that allows existing WebGL code to keep working, as long as you
 ensure the layout of the objects is correct. That means people can start
 incrementally adding uses of Typed Objects to their code right away - and
 it means they can introduce them based on a polyfill of Typed Objects
 instead of waiting for the browser to implement *both* Typed Objects and
 new WebGL support for Typed Objects.


 The idea for alignment/padding is to 

Re: BOMs

2013-11-18 Thread Bjoern Hoehrmann
* Martin J. Dürst wrote:
As for what to say about whether to accept BOMs or not, I'd really want 
to know what the various existing parsers do. If they accept BOMs, then 
we can say they should accept BOMs. If they don't accept BOMs, then we 
should say that they don't.

Unicode signatures are not useful for application/json resources and are
likely to break existing and future code: it is not at all uncommon to
construct JSON text by concatenating, say, string literals with some web
service response without passing the data through a JSON parser. And as
RFC 4627 makes no mention of them, there is little reason to think that
implementations tolerate them.

Perl's JSON module gives me

  malformed JSON string, neither array, object, number, string
  or atom, at character offset 0 (before \x{ef}\x{bb}\x{bf}[])

Python's json module gives me

  ValueError: No JSON object could be decoded

Go's encoding/json module gives me

  invalid character 'ï' looking for beginning of value
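
For comparison, ECMAScript's own JSON.parse behaves the same way: ES5
admits only tab, CR, LF, and space as JSON whitespace, so a leading
U+FEFF is a syntax error:

  JSON.parse("\uFEFF[]") // throws SyntaxError; U+FEFF is not JSON whitespace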

http://shadowregistry.org/js/misc/#t2ea25a961255bb1202da9497a1942e09 is
another example of what kinds of bugs await us if we were to specify the
use of Unicode signatures for JSON, essentially

  new DOMParser().parseFromString("\uBBEF\u3CBF\u7979\u3E2F", "text/xml")

Now U+BBEF U+3CBF U+7979 U+3E2F is not an XML document, but Firefox and
Internet Explorer treat it as if it were equivalent to "<yy/>".
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 


Re: Modules vs Scripts

2013-11-18 Thread Sam Tobin-Hochstadt
On Nov 18, 2013 7:17 AM, Brian Di Palma off...@gmail.com wrote:

 Correct me if I'm wrong David but module import/exports are not
 dynamic which allows tooling/IDEs to be more intelligent and
 programmers to easily understand what dependencies a module
 has/provides?

Yes, that's correct, if I guess correctly at what you mean by dynamic.

Sam

 On Sat, Nov 16, 2013 at 8:30 AM, David Herman dher...@mozilla.com wrote:
  On Nov 16, 2013, at 3:32 AM, John Barton johnjbar...@google.com wrote:
 
  Could someone help me understand why two goals for parsing JS is a
good thing?
 
  Hm, it sounds like you've assumed a conclusion already. Let me try to
explain anyway.
 
  Scripts are for synchronous loading and evaluation; modules are for
asynchronous loading and evaluation. The former is not allowed to import,
which triggers I/O. The latter is allowed to import. There was always going
to have to be a syntactic split.
 
  But there's more. I'll be explaining at this week's TC39 meeting some
more details about the intended model for browser integration. In short,
declarative code based on modules will use the <module> tag (or <script
type=module>, which will mean the same thing and allows polyfilling
today). This is better than <script async>, which was a hopeless idea. This
also works beautifully for fulfilling the promise of 1JS: a new, better
initial environment in which to run top-level application code without
requiring versioning.
 
  <module>
    var x = 12;             // local to this module, not exported on global object
    import $ from "jquery"; // asynchronous dependency, NBD
    '$' in this             // nope, because all globals are scoped to this module
    let y = 13;             // yep, let works great because modules are strict
  </module>
 
  Same number of characters as <script>, better semantics, better global
environment, better language.
 
  If you're not excited about ES6 yet, get excited. :)
 
  Dave
 


Re: BOMs

2013-11-18 Thread Bjoern Hoehrmann
* Henry S. Thompson wrote:
I'm curious to know what level you're invoking the parser at.  As
implied by my previous post about the Python 'requests' package, it
handles application/json resources by stripping any initial BOM it
finds -- you can try this with

 import requests
 r=requests.get("http://www.ltg.ed.ac.uk/ov-test/b16le.json")
 r.json()

The Perl code was

  perl -MJSON -MEncode -e
my $s = encode_utf8(chr 0xFEFF) . '[]'; JSON->new->decode($s)

The Python code was

  import json
  json.loads(u"\uFEFF[]".encode('utf-8'))

The Go code was

  package main
  
  import "encoding/json"
  import "fmt"

  func main() {
      r := "\uFEFF[]"

      var f interface{}
      err := json.Unmarshal([]byte(r), &f)

      fmt.Println(err)
  }

In other words, always passing a UTF-8 encoded byte string to the byte
string parsing part of the JSON implementation. RFC 4627 is the only
specification for the application/json on-the-wire format and it does
not mention anything about Unicode signatures. Looking for certain byte
sequences at the beginning and treating them as a Unicode signature is
the same as looking for `/* ... */` and treating it as a comment.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 


Re: Modules vs Scripts

2013-11-18 Thread Brian Di Palma
You can statically analyze the module text and enumerate the complete
imports and exports of the module without having to execute the module. The
execution of the module will not dynamically increase its imports/exports.

For instance, could you have a Math.random() call that decides whether a
dependency is required?
On Nov 18, 2013 1:33 PM, Sam Tobin-Hochstadt sa...@cs.indiana.edu wrote:


 On Nov 18, 2013 7:17 AM, Brian Di Palma off...@gmail.com wrote:
 
  Correct me if I'm wrong David but module import/exports are not
  dynamic which allows tooling/IDEs to be more intelligent and
  programmers to easily understand what dependencies a module
  has/provides?

 Yes, that's correct, if I guess correctly at what you mean by dynamic.

 Sam

  On Sat, Nov 16, 2013 at 8:30 AM, David Herman dher...@mozilla.com
 wrote:
   On Nov 16, 2013, at 3:32 AM, John Barton johnjbar...@google.com
 wrote:
  
   Could someone help me understand why two goals for parsing JS is a
 good thing?
  
   Hm, it sounds like you've assumed a conclusion already. Let me try to
 explain anyway.
  
   Scripts are for synchronous loading and evaluation; modules are for
 asynchronous loading and evaluation. The former is not allowed to import,
 which triggers I/O. The latter is allowed to import. There was always going
 to have to be a syntactic split.
  
   But there's more. I'll be explaining at this week's TC39 meeting some
 more details about the intended model for browser integration. In short,
 declarative code based on modules will use the <module> tag (or <script
 type=module>, which will mean the same thing and allows polyfilling
 today). This is better than <script async>, which was a hopeless idea. This
 also works beautifully for fulfilling the promise of 1JS: a new, better
 initial environment in which to run top-level application code without
 requiring versioning.
  
   <module>
     var x = 12;             // local to this module, not exported on global object
     import $ from "jquery"; // asynchronous dependency, NBD
     '$' in this             // nope, because all globals are scoped to this module
     let y = 13;             // yep, let works great because modules are strict
   </module>
  
   Same number of characters as <script>, better semantics, better global
 environment, better language.
  
   If you're not excited about ES6 yet, get excited. :)
  
   Dave
  


Re: Modules vs Scripts

2013-11-18 Thread Sam Tobin-Hochstadt
Yes, exactly.

Of course, running a module can load other modules using the Loaders API,
but that doesn't add any new imports or exports to the module itself.
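
A sketch of the distinction, using draft ES6 module syntax (System.import
stands in for whatever name the Loader API ends up exposing):

import $ from "jquery";   // declarative: tooling sees this without running code

// Not possible -- import declarations are only allowed at the top level:
//   if (Math.random() > 0.5) { import _ from "underscore"; }

// Runtime-conditional loading goes through the Loader API instead, and
// adds nothing to this module's statically known imports/exports:
if (Math.random() > 0.5) {
  System.import("underscore").then(function (_) { /* use _ */ });
}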

Sam
On Nov 18, 2013 8:54 AM, Brian Di Palma off...@gmail.com wrote:

 You can statically analyze the module text and enumerate the complete
 imports and exports of the module without having to execute the module. The
 execution of the module will not dynamically increase its imports/exports.

 For instance, could you have a Math.random() call that decides whether a
 dependency is required?
 On Nov 18, 2013 1:33 PM, Sam Tobin-Hochstadt sa...@cs.indiana.edu
 wrote:


 On Nov 18, 2013 7:17 AM, Brian Di Palma off...@gmail.com wrote:
 
  Correct me if I'm wrong David but module import/exports are not
  dynamic which allows tooling/IDEs to be more intelligent and
  programmers to easily understand what dependencies a module
  has/provides?

 Yes, that's correct, if I guess correctly at what you mean by dynamic.

 Sam

  On Sat, Nov 16, 2013 at 8:30 AM, David Herman dher...@mozilla.com
 wrote:
   On Nov 16, 2013, at 3:32 AM, John Barton johnjbar...@google.com
 wrote:
  
   Could someone help me understand why two goals for parsing JS is a
 good thing?
  
   Hm, it sounds like you've assumed a conclusion already. Let me try to
 explain anyway.
  
   Scripts are for synchronous loading and evaluation; modules are for
 asynchronous loading and evaluation. The former is not allowed to import,
 which triggers I/O. The latter is allowed to import. There was always going
 to have to be a syntactic split.
  
   But there's more. I'll be explaining at this week's TC39 meeting some
 more details about the intended model for browser integration. In short,
 declarative code based on modules will use the <module> tag (or <script
 type=module>, which will mean the same thing and allows polyfilling
 today). This is better than <script async>, which was a hopeless idea. This
 also works beautifully for fulfilling the promise of 1JS: a new, better
 initial environment in which to run top-level application code without
 requiring versioning.
  
   <module>
     var x = 12;             // local to this module, not exported on global object
     import $ from "jquery"; // asynchronous dependency, NBD
     '$' in this             // nope, because all globals are scoped to this module
     let y = 13;             // yep, let works great because modules are strict
   </module>
  
   Same number of characters as <script>, better semantics, better
 global environment, better language.
  
   If you're not excited about ES6 yet, get excited. :)
  
   Dave
  


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Niko Matsakis
On Mon, Nov 18, 2013 at 03:07:28AM -0800, K. Gadd wrote:
 If DataView were to also get optimized in SpiderMonkey, that would release
 a lot of the pressure (use-case wise) for Typed Objects to expose
 fine-grained control over alignment/padding and it would make it less
 immediately necessary for them to exist. That's probably a good thing.

Optimizing DataView in SM shouldn't be difficult. It hasn't been high
on my priority list but it seems fairly straightforward.


Niko


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Niko Matsakis
On Sun, Nov 17, 2013 at 10:42:16PM +0100, Dmitry Lomov wrote:
 Typed Objects polyfill lives here: https://github.com/dherman/structs.js
 Dave and I work on it; current status is pretty close to the strawman, minus
 handles and cursors (which are a bit controversial at this point and, as far
 as I understand, are not in the Firefox implementation).

One correction:

Handles are implemented in the SpiderMonkey implementation and are
being used in the PJS (nee Rivertrail) polyfill:
https://github.com/nikomatsakis/pjs-polyfill

My point of view on handles:

Their design is indeed controversial. I summarized the design
tradeoffs in a blog post [1]. After that post, Dmitry, Dave, and I
basically decided to exclude handles from the typed objects spec but
possibly to include them in other specs (such as PJS) that build on
typed objects.

As I see it, the reasoning for deferring handles is as follows:

1. Handles are not as important for performance as initially imagined.
   Basically, the original impetus behind handles was to give users an
   explicit way to avoid the intermediate objects that are created by
   expressions like `array[1].foo[3]`. But, at least in the SM
   implementation, these intermediate objects are typically optimized
   away once we get into the JIT. Moreover, with an efficient GC, the
   cost of such intermediate objects may not be that high. Given
   those facts, the complexity of movable and nullable handles doesn't
   seem worth it.

2. A secondary use case for handles is as a kind of revokable
   capability into a buffer, but for this to be of use we must make
   sure we get the details correct. For many advanced APIs, it can be
   useful to give away a pointer into a (previously unaliased) buffer
   and then be able to revoke that pointer, hence ensuring that the
   buffer is again unaliased. Use cases like this might justify
   movable and nullable handles, even if raw performance does not.

   However, in cases like these, the details are crucial. If we were
   to design handles in isolation, rather than in tandem with the
   relevant specs, we might wind up with a design that does not
   provide adequate guarantees. Also -- there may be other ways to
   achieve those same goals, such as something akin to the current
   neutering of buffers that occurs when a buffer is transferred
   between workers.


Niko


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Dmitry Lomov
And Niko's excellent post on handles is here:
http://smallcultfollowing.com/babysteps/blog/2013/10/18/typed-object-handles/
Sorry for not referencing it in my reply!


On Mon, Nov 18, 2013 at 3:36 PM, Niko Matsakis n...@alum.mit.edu wrote:

 On Sun, Nov 17, 2013 at 10:42:16PM +0100, Dmitry Lomov wrote:
  Typed Objects polyfill lives here: https://github.com/dherman/structs.js
  Dave and I work on it; current status is pretty close to the strawman, minus
  handles and cursors (which are a bit controversial at this point and, as
  far as I understand, are not in the Firefox implementation).

 One correction:

 Handles are implemented in the SpiderMonkey implementation and are
 being used in the PJS (nee Rivertrail) polyfill:
 https://github.com/nikomatsakis/pjs-polyfill

 My point of view on handles:

 Their design is indeed controversial. I summarized the design
 tradeoffs in a blog post [1]. After that post, Dmitry, Dave, and I
 basically decided to exclude handles from the typed objects spec but
 possibly to include them in other specs (such as PJS) that build on
 typed objects.

 As I see it, the reasoning for deferring handles is as follows:

 1. Handles are not as important for performance as initially imagined.
Basically, the original impetus behind handles was to give users an
explicit way to avoid the intermediate objects that are created by
expressions like `array[1].foo[3]`. But, at least in the SM
implementation, these intermediate objects are typically optimized
away once we get into the JIT. Moreover, with an efficient GC, the
cost of such intermediate objects may not be that high. Given
those facts, the complexity of movable and nullable handles doesn't
seem worth it.

 2. A secondary use case for handles is as a kind of revokable
capability into a buffer, but for this to be of use we must make
sure we get the details correct. For many advanced APIs, it can be
useful to give away a pointer into a (previously unaliased) buffer
and then be able to revoke that pointer, hence ensuring that the
buffer is again unaliased. Use cases like this might justify
movable and nullable handles, even if raw performance does not.

However, in cases like these, the details are crucial. If we were
to design handles in isolation, rather than in tandem with the
relevant specs, we might wind up with a design that does not
provide adequate guarantees. Also -- there may be other ways to
achieve those same goals, such as something akin to the current
neutering of buffers that occurs when a buffer is transferred
between workers.


 Niko



Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Niko Matsakis
On Sun, Nov 17, 2013 at 02:04:57PM +0100, Till Schneidereit wrote:
 The strawman at [1] is fairly close to what's going to end up in the spec,
 content-wise. Additionally, the implementation in SpiderMonkey is pretty
 complete by now, and there are lots of tests[2].

Indeed, it's approaching full functionality. For those who may want to
experiment, keep in mind that (1) all typed object support is only
available in Nightly builds and (2) all globals are contained behind a
TypedObject meta object (e.g., to create a new struct type, you
write:

var Point = new TypedObject.StructType({x: TypedObject.float32, ...})

Here are the major features that are not yet landed and their status:

1. Reference types (Bug 898359 -- reviewed, landing very soon)
2. Support for unsized typed arrays (Bug 922115 -- implemented, not yet 
reviewed)
3. Integrate typed objects and typed arrays (Bug 898356 -- not yet implemented)

Obviously #3 is the big one. I haven't had time to dig into it much
yet; there are a number of minor steps along the way, but I don't see
any fundamental difficulties. There are also various minor deviations
between the spec, the polyfill, and the native SM implementation that
will need to be ironed out.

 I don't know what the timing for integrating Typed Objects into the
 spec proper is, cc'ing Niko for that.

Dmitry and I were planning on beginning work on the actual spec
language soon. The goal is to advance the typed objects portion of the
spec -- which I believe is fairly separable from the rest -- as
quickly as possible, taking advantage of the new process.


Niko


Re: es-discuss Digest, Vol 81, Issue 82

2013-11-18 Thread Mihai Niță
I would add my two cents here.


*Where the precise type of the data stream is known (e.g. Unicode
big-endian or Unicode little-endian), the BOM should not be used.*

From http://www.unicode.org/faq/utf_bom.html#bom1

And there is something in RFC 4627 that tells me JSON is not BOM-aware:
==

   JSON text SHALL be encoded in Unicode.  The default encoding is UTF-8.

   Since the first two characters of a JSON text will always be ASCII
characters [RFC0020], it is possible to determine whether an octet
stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at
the pattern of nulls in the first four octets.

   00 00 00 xx  UTF-32BE
   00 xx 00 xx  UTF-16BE
   xx 00 00 00  UTF-32LE
   xx 00 xx 00  UTF-16LE
   xx xx xx xx  UTF-8

==
These patterns are not BOMs; otherwise, they would be something like this:

   00 00 FE FF  UTF-32BE
   FE FF xx xx  UTF-16BE
   FF FE 00 00  UTF-32LE
   FF FE xx xx  UTF-16LE
   EF BB BF xx  UTF-8


It is kind of unfortunate that the precise type of the data stream is not
determined, and BOM is not accepted.

But a mechanism to decide the encoding is specified in the RFC, and it does
not include a BOM; in fact, it prevents the use of a BOM
(00 00 FE FF does not match the 00 00 00 xx pattern, for instance).

So, by the RFC, a BOM is not expected / understood.
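
A minimal sketch of that detection rule makes the point concrete
(illustrative code, not from the RFC; bytes holds at least four octets):

function detectJSONEncoding(bytes) {
  // true where the octet is 0x00, matching the RFC 4627 section 3 table
  var z = [bytes[0] === 0, bytes[1] === 0, bytes[2] === 0, bytes[3] === 0];
  if ( z[0] &&  z[1] &&  z[2] && !z[3]) return "UTF-32BE"; // 00 00 00 xx
  if ( z[0] && !z[1] &&  z[2] && !z[3]) return "UTF-16BE"; // 00 xx 00 xx
  if (!z[0] &&  z[1] &&  z[2] &&  z[3]) return "UTF-32LE"; // xx 00 00 00
  if (!z[0] &&  z[1] && !z[2] &&  z[3]) return "UTF-16LE"; // xx 00 xx 00
  return "UTF-8";                                          // xx xx xx xx
}
// A UTF-32BE BOM (00 00 FE FF) matches none of the non-UTF-8 patterns,
// so a BOM-prefixed stream would be (mis)classified as UTF-8.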

-

Although I am afraid that the RFC has a problem:
I think "日本語" (U+0022 U+65E5 U+672C U+8A9E U+0022) is valid JSON (same as
"foo").

The first four bytes are:

   00 00 00 22  UTF-32BE
   00 22 E5 65  UTF-16BE
   22 00 00 00  UTF-32LE
   22 00 65 E5  UTF-16LE
   22 E6 97 A5  UTF-8

The UTF-16 bytes don't match the patterns in the RFC, so UTF-16 streams would
(wrongly) be detected as UTF-8, if one strictly follows the RFC.

Regards,
Mihai


==
From: Bjoern Hoehrmann derhoe...@gmx.net
To: h...@inf.ed.ac.uk (Henry S. Thompson)
Cc: IETF Discussion i...@ietf.org, JSON WG j...@ietf.org, Martin J.
Dürst due...@it.aoyama.ac.jp, www-...@w3.org, es-discuss 
es-discuss@mozilla.org
Date: Mon, 18 Nov 2013 14:48:19 +0100
Subject: Re: BOMs
* Henry S. Thompson wrote:
I'm curious to know what level you're invoking the parser at.  As
implied by my previous post about the Python 'requests' package, it
handles application/json resources by stripping any initial BOM it
finds -- you can try this with

 import requests
 r=requests.get("http://www.ltg.ed.ac.uk/ov-test/b16le.json")
 r.json()

The Perl code was

  perl -MJSON -MEncode -e
my $s = encode_utf8(chr 0xFEFF) . '[]'; JSON->new->decode($s)

The Python code was

  import json
  json.loads(u"\uFEFF[]".encode('utf-8'))

The Go code was

  package main

  import "encoding/json"
  import "fmt"

  func main() {
      r := "\uFEFF[]"

      var f interface{}
      err := json.Unmarshal([]byte(r), &f)

      fmt.Println(err)
  }

In other words, always passing a UTF-8 encoded byte string to the byte
string parsing part of the JSON implementation. RFC 4627 is the only
specification for the application/json on-the-wire format and it does
not mention anything about Unicode signatures. Looking for certain byte
sequences at the beginning and treating them as a Unicode signature is
the same as looking for `/* ... */` and treating it as a comment.
--
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/


Re: es-discuss Digest, Vol 81, Issue 82

2013-11-18 Thread Bjoern Hoehrmann
* mn...@google.com wrote:
The first four bytes are:

   00 00 00 22  UTF-32BE
   00 22 E5 65  UTF-16BE
   22 00 00 00  UTF-32LE
   22 00 65 E5  UTF-16LE
   22 E6 97 A5  UTF-8

The UTF-16 bytes don't match the patterns in RFC, so UTF-16 streams would
(wrongly) be detected as UTF-8, if one strictly follows the RFC.

RFC 4627 does not allow string literals at the top level.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 


Re: es-discuss Digest, Vol 81, Issue 82

2013-11-18 Thread Mihai Niță
Sorry, I just took the first sentence here (the second one added to the
confusion rather than clarifying it, but this is probably just me):

A JSON text is a sequence of tokens.  The set of tokens includes six
structural characters, strings, numbers, and three literal names.
A JSON text is a serialized object or array.


Anyway, this is good. It means that the RFC has no problem, it's just me :-)


But the conclusion that the RFC does not allow BOM is independent, and
I think it stands.


Mihai




On Mon, Nov 18, 2013 at 9:50 AM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * mn...@google.com wrote:
 The first four bytes are:
 
00 00 00 22  UTF-32BE
00 22 E5 65  UTF-16BE
22 00 00 00  UTF-32LE
22 00 65 E5  UTF-16LE
22 E6 97 A5  UTF-8
 
 The UTF-16 bytes don't match the patterns in RFC, so UTF-16 streams would
 (wrongly) be detected as UTF-8, if one strictly follows the RFC.

 RFC 4627 does not allow string literals at the top level.
 --
 Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
 Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/



Re: Generator Arrow Functions

2013-11-18 Thread Brendan Eich

Ѓорѓи Ќосев wrote:
It's harder to scan whether this is a generator arrow function or a
normal arrow function because the star is too far away:


someFunction(*(someArgument, anotherArgument) => {
... code ...
});

compared to this form, where it's immediately obvious that this is not
a regular function, just by looking at the composed symbol (arrow-star):


someFunction((someArgument, anotherArgument) =>* {
... code ...
});


I buy it. This is what I'll propose next week as concrete syntax. It's a
small point, but the rationale is that the star goes after the first token
that identifies the special form as a function form. For generator
functions, that token is 'function'. For arrows, it is '=>'.
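
Concretely (a sketch of the proposed placement only -- generator arrows
are not in any current draft):

function* genDecl() { yield 1; }    // star after 'function', the form-identifying token
var genArrow = () =>* { yield 1; }; // star after '=>', by the same rule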


/be


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Andrea Giammarchi
if it's about experimenting then `with(TypedObject) {}` would do :P

Any chance there will be a way to bypass most of the stuff for production?

Thanks


On Mon, Nov 18, 2013 at 6:58 AM, Niko Matsakis n...@alum.mit.edu wrote:

 On Sun, Nov 17, 2013 at 02:04:57PM +0100, Till Schneidereit wrote:
  The strawman at [1] is fairly close to what's going to end up in the
 spec,
  content-wise. Additionally, the implementation in SpiderMonkey is pretty
  complete by now, and there are lots of tests[2].

 Indeed, it's approaching full functionality. For those who may want to
 experiment, keep in mind that (1) all typed object support is only
 available in Nightly builds and (2) all globals are contained behind a
 TypedObject meta object; e.g., to create a new struct type, you
 write:

 var Point = new TypedObject.StructType({x: TypedObject.float32, ...})

 Here are the major features that are not yet landed and their status:

 1. Reference types (Bug 898359 -- reviewed, landing very soon)
 2. Support for unsized typed arrays (Bug 922115 -- implemented, not yet
 reviewed)
 3. Integrate typed objects and typed arrays (Bug 898356 -- not yet
 implemented)

 Obviously #3 is the big one. I haven't had time to dig into it much
 yet, there are a number of minor steps along the way, but I don't see
 any fundamental difficulties. There are also various minor deviations
 between the spec, the polyfill, and the native SM implementation that
 will need to be ironed out.

  I don't know what the timing for integrating Typed Objects into the
  spec proper is, cc'ing Niko for that.

 Dmitry and I were planning on beginning work on the actual spec
 language soon. The goal is to advance the typed objects portion of the
 spec -- which I believe is fairly separable from the rest -- as
 quickly as possible, taking advantage of the new process.


 Niko


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Niko Matsakis
On Mon, Nov 18, 2013 at 11:46:30AM -0800, Andrea Giammarchi wrote:
 if it's about experimenting then `with(TypedObject) {}` would do :P
 
 Any chance there will be a way to bypass most of the stuff for production?

Sorry I don't understand the question.


Niko


Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Andrea Giammarchi
I believe one of the benefits of having Typed Objects is to have more
performant objects and collections of objects, as structs have provided in
C since basically forever.

In this case, a full-spec polyfill, like the one pointed out in this thread,
is very nice to have, but it will inevitably slow everything down in
production compared to vanilla JS objects on every non-evergreen
device/browser unable to optimize these objects/structs.

Accordingly, I wonder if there is any plan to make that polyfill able to
ignore everything in production and do just the most essential work, in
order not to slow down already slow browsers on already slow devices
(Android 2.3, but also Firefox OS and ZTE ...)

As an example, in 2007 I proposed a strict version of JavaScript:
http://devpro.it/code/157.html

Explained here:
http://webreflection.blogspot.com/2007/05/javastrict-strict-type-arguments.html

And with a single flag set to false, all checks disappeared in production,
so as not to slow down things that are useful for developers only (as a
polyfill for StructType would be) but not for browsers unable to optimize
those references/objects/statically defined things.

So my question was: is there any plan to be able to mark that polyfill in a
way that all checks are ignored and just the essential operations are
performed, so the exposed behavior can be trusted, without slowing down all
non-compatible browsers with all that logic?

Or better: is there any plan to offer a simplified version for production
that does not do everything the full-spec native implementation would do?

I hope this is clearer; thanks for any answer.



On Mon, Nov 18, 2013 at 12:55 PM, Niko Matsakis n...@alum.mit.edu wrote:

 On Mon, Nov 18, 2013 at 11:46:30AM -0800, Andrea Giammarchi wrote:
  if it's about experimenting then `with(TypedObject) {}` would do :P
 
  Any chance there will be a way to bypass most of the stuff for
 production?

 Sorry I don't understand the question.


 Niko



Re: Typed Objects / Binary Data Polyfills

2013-11-18 Thread Andrea Giammarchi
just to extra-simplify:

you can build=debug and build=release ... is there any plan to be able to
build=release that script? 'cause otherwise I'll spend some time creating a
script that does inline analysis and optimizations at runtime for slower
devices and/or production.

Thanks for further answers, if any.


On Mon, Nov 18, 2013 at 6:23 PM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 I believe one of the benefits of having Typed Objects is to have more
 performant objects and collections of objects, as structs have provided in
 C since basically forever.

 In this case, a full-spec polyfill, like the one pointed out in this
 thread, is very nice to have, but it will inevitably slow everything down
 in production compared to vanilla JS objects on every non-evergreen
 device/browser unable to optimize these objects/structs.

 Accordingly, I wonder if there is any plan to make that polyfill able to
 ignore everything in production and do just the most essential work, in
 order not to slow down already slow browsers on already slow devices
 (Android 2.3, but also Firefox OS and ZTE ...)

 As an example, in 2007 I proposed a strict version of JavaScript:
 http://devpro.it/code/157.html

 Explained here:
 http://webreflection.blogspot.com/2007/05/javastrict-strict-type-arguments.html

 And with a single flag set to false, all checks disappeared in production,
 so as not to slow down things that are useful for developers only (as a
 polyfill for StructType would be) but not for browsers unable to optimize
 those references/objects/statically defined things.

 So my question was: is there any plan to be able to mark that polyfill in
 a way that all checks are ignored and just the essential operations are
 performed, so the exposed behavior can be trusted, without slowing down
 all non-compatible browsers with all that logic?

 Or better: is there any plan to offer a simplified version for production
 that does not do everything the full-spec native implementation would do?

 I hope this is clearer; thanks for any answer.



 On Mon, Nov 18, 2013 at 12:55 PM, Niko Matsakis n...@alum.mit.edu wrote:

 On Mon, Nov 18, 2013 at 11:46:30AM -0800, Andrea Giammarchi wrote:
  if it's about experimenting then `with(TypedObject) {}` would do :P
 
  Any chance there will be a way to bypass most of the stuff for
 production?

 Sorry I don't understand the question.


 Niko


