Re: typed arrays

2014-03-04 Thread C. Scott Ananian
While we're on the topic, let me bang the drum one last time for tweaking
some of the `Array` built-ins so that they can be applied to `TypedArray`s.
The fact that there are separate `Array.from` and `%TypedArray%.from`
methods in the spec (for example) makes me sad. IIRC all that is needed in
`Array.from` is to avoid writing the length field if the length of the
receiver is already correct. Similarly, if we defined a `GetLength` helper
that special-cased instances of `TypedArray`, the definitions of
`Array#forEach`, etc could simply say, `%TypedArray%#forEach` is strictly
equal to `Array#forEach`.

Of course implementors will specialize on the receiver type, but this
detail shouldn't be exposed in the spec IMO.
  --scott


Re: typed arrays

2014-03-04 Thread Allen Wirfs-Brock

On Mar 4, 2014, at 8:17 AM, C. Scott Ananian wrote:

 While we're on the topic, let me bang the drum one last time for tweaking 
 some of the `Array` built-ins so that they can be applied to `TypedArray`s. 
 The fact that there are separate `Array.from` and `%TypedArray%.from` methods 
 in the spec (for example) makes me sad. IIRC all that is needed in 
 `Array.from` is to avoid writing the length field if the length of the 
 receiver is already correct. Similarly, if we defined a `GetLength` helper 
 that special-cased instances of `TypedArray`, the definitions of 
 `Array#forEach`, etc could simply say, `%TypedArray%#forEach` is strictly 
 equal to `Array#forEach`.
 
 Of course implementors will specialize on the receiver type, but this detail 
 shouldn't be exposed in the spec IMO.
   --scott
 
There's actually a big difference between %TypedArray%.from and Array.from.
The typed array version must accurately determine the number of elements in the
new array before instantiating it.  The regular array version can
pre-instantiate the new array and dynamically adjust its length. This affects a
number of details of the algorithms. There are other cases where variations to
the array method algorithms are necessary to deal with the potential
sparseness of regular arrays.
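To illustrate (a rough sketch, not the actual spec algorithms):

    // A regular Array can be grown as values arrive:
    function arrayFromSketch(iterable) {
      var result = [];
      var i = 0;
      for (var value of iterable) {
        result[i++] = value;        // length adjusts dynamically
      }
      return result;
    }

    // A typed array is fixed-length, so the element count must be known
    // before the instance is created, e.g. by exhausting the iterator first:
    function typedArrayFromSketch(Ctor, iterable) {
      var values = [...iterable];           // count the elements up front
      var result = new Ctor(values.length); // length fixed at construction
      values.forEach(function (v, i) { result[i] = v; });
      return result;
    }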

This is the case for several of the array methods, but for many of them the
same algorithmic description can be used for both Arrays and Typed Arrays.  In
those cases, the ES6 spec doesn't duplicate the algorithm; instead the
Typed Array method specification simply references the corresponding Array
method specification. For example: 
http://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.foreach
 

There was one issue I had to make a decision on.  In the cases where the same
algorithm was applicable to both Array and Typed Array, should a single (same
identity) function be used in both Array.prototype and %TypedArray%.prototype,
or should a distinct function be used for each?  I chose the latter because it
seems to allow for more implementation flexibility.  There are lots of ways
that some of these algorithms might be optimized if you know you are dealing
with a typed array. I wanted to give implementations that option and the most
straightforward way to do so is to simply say that, for example,
Array.prototype.forEach and %TypedArray%.prototype.forEach are distinct
function objects.  That way they can have distinct implementations.  However,
if an engine wants to share the same implementation-level code between the two
functions it could still do that.

Allen


Re: typed arrays

2014-03-04 Thread C. Scott Ananian
On Tue, Mar 4, 2014 at 12:23 PM, Allen Wirfs-Brock
al...@wirfs-brock.com wrote:
 There's actually a big difference between %TypedArray%.from and Array.from.
 The typed array version must accurately determine the number of elements in
 the new array before instantiating it.  The regular array version can
 pre-instantiate the new array and dynamically adjust its length. This affects
 a number of details of the algorithms. There are other cases where variations
 to the array method algorithms are necessary to deal with the potential
 sparseness of regular arrays.

This is an optimization that would potentially be useful in
`Array.from` as well in many cases.
The spec should allow this optimization for both `Array.from` *and*
`%TypedArray%.from` -- but we don't need to hardcode the optimization
into the spec.

That is, if the current `Array.from` spec has side effects that
prohibit a "copy to a helper first" implementation, that should
probably be fixed.  As an example, if an exception is thrown halfway
through iteration of an iterable argument, the intermediate writes
remain invisible whether the receiver is `Array` or `%TypedArray%`, so
the "copy to a helper" optimization can be safely performed by the
implementation in these cases; it doesn't need to be written into the
spec.
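Concretely: because nothing is written to the receiver until iteration has
finished, an exception thrown partway through leaves no observable trace
(sketch, not proposed spec text):

    function* throwsHalfway() {
      yield 1;
      yield 2;
      throw new Error("halfway");  // iteration dies here
    }

    try {
      // An engine using the "copy to a helper first" strategy spreads the
      // iterable into a temporary array; the throw happens before any
      // Uint8Array (or Array) instance is created or written to.
      Uint8Array.from(throwsHalfway());
    } catch (e) {
      // The temporary is discarded; user code never saw a partial result,
      // so the optimization is unobservable and needn't appear in the spec.
    }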

 This is the case for several of the array methods but for many of them, the
 same algorithmic description can be used for both Arrays and Typed Arrays.
 In those cases, the ES6 spec. doesn't duplicate the algorithm but instead
 the Typed Array method specification simply reference the corresponding
 Array method specification. For example:
 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.foreach

And my suggestion here was that the reference could be even shorter if
you just incorporated a `GetLength` helper that discriminated between
typed arrays and arrays.  Then you wouldn't need the weasel-word text;
you could just say the implementations are the same.

And, of course, for users it's one more place where they don't have to
worry about the difference between `TypedArray` and `Array`.  "Almost
the same" is a bug waiting to happen.

 There was one issue I had to make a decision on.  In the cases where the
 same algorithm was applicable to both Array and Typed Array, should a single
 (same identity) function be used in both Array.prototype and
 %TypedArray%.prototype, or should a distinct function be used for each?  I
 chose the latter because it seems to allow for more implementation
 flexibility.  There are lots of ways that some of these algorithms might be
 optimized if you know you are dealing with a typed array. I wanted to give
 implementations that option and the most straightforward way to do so is to
 simply say that, for example, Array.prototype.forEach and
 %TypedArray%.prototype.forEach are distinct function objects.  That way they
 can have distinct implementations.  However, if an engine wants to share the
 same implementation-level code between the two functions it could still do
 that.

I should let the implementors weigh in, but I believe this is exactly
backwards for modern JavaScript implementations.  The JIT is already
doing function optimizations based on receiver and argument types,
etc., so it's already the case that a single JavaScript Function
object has a number of different implementation-level code variants.
There's no benefit to the implementor in differentiating the function
objects -- and doing so introduces a new and clever way to add bugs to
your code.  ("Oh, sorry, `arg.map` isn't actually generic if you
happen to pass a typed array as `arg`.")
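For example, as the draft stands:

    // Distinct function objects, even where the algorithms are identical:
    var typedArrayProto = Object.getPrototypeOf(Int8Array.prototype); // %TypedArray%.prototype
    Array.prototype.forEach === typedArrayProto.forEach;  // false, per the current draft
    // Under the single shared function I'm arguing for, this comparison would
    // be true, and borrowing the method across the two kinds of receiver
    // would be a non-question.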
  --scott


Re: typed arrays

2014-03-04 Thread Boris Zbarsky

On 3/4/14 1:21 PM, C. Scott Ananian wrote:

There's no benefit to the implementor in differentiating the function
objects


There is: You don't have to have extra machinery to stash a copy of the
function object somewhere when it first gets created so you can ensure
that the same object ends up used in both places.  You can simply create
function objects independently for the two prototypes, which is much
simpler.


-Boris


typed arrays

2014-03-03 Thread Mark Volkmann
Are typed arrays considered to be part of ES6? I see they are described in
the ES6 working draft, but when I find lists of ES6 features on the web, I
never see those included. I see they are also described in a separate
specification here: http://www.khronos.org/registry/typedarray/specs/latest/.

-- 
R. Mark Volkmann
Object Computing, Inc.


Re: typed arrays

2014-03-03 Thread Allen Wirfs-Brock

On Mar 3, 2014, at 4:50 PM, Mark Volkmann wrote:

 Are typed arrays considered to be part of ES6? I see they are described in 
 the ES6 working draft, but when I find lists of ES6 features on the web, I 
 never see those included. I see they are also described in a separate 
 specification here: http://www.khronos.org/registry/typedarray/specs/latest/.

Yes, they are part of ES6, and the Typed Array specification in the ES6 spec is
intended to supersede the Khronos spec.

Allen



Extensible typed arrays use case in the wild

2013-11-27 Thread Domenic Denicola
I found this relevant to some previous discussions:

https://github.com/feross/native-buffer-browserify/commit/18c6784277e25db01ee145e5dfaaf23bb5b311fc

The native-buffer-browserify is an attempt to provide a typed array-based 
version of Node.js's Buffer interface [1]. (Buffer is something that was 
created before TypedArrays existed.) In Firefox, which has non-extensible typed 
arrays, this cannot be done, and so the author had to resort to using a proxy 
that forwards to the underlying typed array. In other engines, it worked 
straightforwardly.

I believe this use case could also be solved by ES6-style subclassing support 
on TypedArrays, but that doesn't appear to work in Firefox either, from my 
tests.

[1]: http://nodejs.org/docs/latest/api/buffer.html
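For reference, the subclassing route would look roughly like this (a minimal
sketch assuming ES6 class syntax and extensible subclass instances; this is
illustrative, not the actual native-buffer-browserify code):

    class BufferLike extends Uint8Array {
      // A tiny Node-style helper; the real Buffer API has many more methods.
      toString(encoding) {
        if (encoding && encoding !== "utf8") {
          throw new Error("only utf8 in this sketch");
        }
        return String.fromCharCode.apply(null, this);
      }
    }

    var buf = new BufferLike(3);
    buf[0] = 104; buf[1] = 105; buf[2] = 33;
    buf.toString();   // "hi!"
    buf.foo = "bar";  // whether this works is exactly the extensibility
                      // question under discussion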


Re: Extensible typed arrays use case in the wild

2013-11-27 Thread Till Schneidereit
Hey Domenic,

thanks for bringing this up. IIUC, extensible typed arrays have been
decided on, so we will implement them. CCing Niko, who is working on typed
arrays and typed objects.


On Wed, Nov 27, 2013 at 2:56 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

 I found this relevant to some previous discussions:


 https://github.com/feross/native-buffer-browserify/commit/18c6784277e25db01ee145e5dfaaf23bb5b311fc

 The native-buffer-browserify is an attempt to provide a typed
 array-based version of Node.js's Buffer interface [1]. (Buffer is something
 that was created before TypedArrays existed.) In Firefox, which has
 non-extensible typed arrays, this cannot be done, and so the author had to
 resort to using a proxy that forwards to the underlying typed array. In
 other engines, it worked straightforwardly.

 I believe this use case could also be solved by ES6-style subclassing
 support on TypedArrays, but that doesn't appear to work in Firefox either,
 from my tests.

 [1]: http://nodejs.org/docs/latest/api/buffer.html


Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Steve Fink
On 09/04/2013 02:41 PM, Brendan Eich wrote:
 But lost expandos due to loss of identity are an especially nasty
 kind of bug to find. Is there any use-case here? We've never had a bug
 report asking us to make SpiderMonkey's typed arrays extensible, AFAIK.

We have: https://bugzilla.mozilla.org/show_bug.cgi?id=695438



Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Steve Fink
On 09/04/2013 04:15 PM, Filip Pizlo wrote:

 On Sep 4, 2013, at 3:09 PM, Brendan Eich bren...@mozilla.com
 mailto:bren...@mozilla.com wrote:

 Filip Pizlo mailto:fpi...@apple.com
 September 4, 2013 12:34 PM
 My point is that having custom properties, or not, doesn't change
 the overhead for the existing typed array spec and hence has no
 effect on small arrays.  The reasons for this include:

 - Typed arrays already have to be objects, and hence have a
 well-defined behavior on '=='.

 - Typed arrays already have to be able to tell you that they are in
 fact typed arrays, since JS doesn't have static typing.

 - Typed arrays already have prototypes, and those are observable
 regardless of expandability.  A typed array from one global object
 will have a different prototype than a typed array from a different
 global object.  Or am I misunderstanding the spec?

 - Typed arrays already have to know about their buffer.

 - Typed arrays already have to know about their offset into the
 buffer.  Or, more likely, they have to have a second pointer that
 points directly at the base from which they are indexed.

 - Typed arrays already have to know their length.

 You're not proposing changing these aspects of typed arrays, right?

 Of course not, but for very small fixed length arrays whose .buffer
 is never accessed, an implementation might optimize harder.

 As I said, of course you can do this, and one way you could try
 harder is to put the buffer pointer in a side table.  The side table
 maps array object pointers to their buffers, and you only make an
 entry in this table if .buffer is mentioned.

 But if we believe that this is a sensible thing for a VM to do - and
 of course it is! - then the same thing can be done for the custom
 property storage pointer.

 It's hard for me to say no, Filip's analysis shows that's never
 worthwhile, for all time.

 The super short message is this: so long as an object obeys object
 identity on '==' then you can have free if unused, suboptimal if
 you use them custom properties by using a weak map on the side.
  This is true of typed arrays and it would be true of any other
 object that does object-style ==.  If you allocate such an object
 and never add a custom property then the weak map will never have an
 entry for it; but if you put custom properties in the object then
 the map will have things in it.  But with typed arrays you can do
 even better as my previous message suggests: so long as an object
 has a seldom-touched field and you're willing to eat an extra
 indirection or an extra branch on that field, you can have free if
 unused, still pretty good if you use them custom properties by
 displacing that field.  Typed arrays have both of these properties
 right now and so expandability is a free lunch.

 The last sentence makes a for-all assertion I don't think
 implementations must be constrained by.

 How so?  It is true that some VM implementations will be better than
 others.  But ultimately every VM can implement every optimization that
 every other VM has; in fact my impression is that this is exactly what
 is happening as we speak.

 So, it doesn't make much sense to make language design decisions
 because it might make some implementor's life easier right now.  If
 you could argue that something will /never/ be efficient if we add
 feature X, then that might be an interesting argument.  But as soon as
 we identify one sensible optimization strategy for making something
 free, I would tend to think that this is sufficient to conclude that
 the feature is free and there is no need to constrain it.  If we don't
 do this then we risk adding cargo-cult performance features that
 rapidly become obsolete.

This general argument bothers me slightly, because it assumes no
opportunity cost in making something free(ish). Even if you can
demonstrate that allowing X can be made fast, it isn't a complete
argument for allowing X, since disallowing X might enable some other
optimization or feature or semantic simplification.  Such demonstrations
are still useful, since they can shoot down objections based solely on
performance.

But maybe I'm misinterpreting "...sufficient to conclude...that there is
no need to constrain [the feature]." Perhaps you only meant that there is
no need to constrain it *for reasons of performance*? If so, then you only
need to consider the opportunity cost of other optimizations.



Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Filip Pizlo

On Sep 4, 2013, at 11:17 PM, Steve Fink sph...@gmail.com wrote:

 On 09/04/2013 04:15 PM, Filip Pizlo wrote:
 
 On Sep 4, 2013, at 3:09 PM, Brendan Eich bren...@mozilla.com wrote:
 
 Filip Pizlo mailto:fpi...@apple.com
 September 4, 2013 12:34 PM
 My point is that having custom properties, or not, doesn't change the 
 overhead for the existing typed array spec and hence has no effect on 
 small arrays.  The reasons for this include:
 
 - Typed arrays already have to be objects, and hence have a well-defined 
 behavior on '=='.
 
 - Typed arrays already have to be able to tell you that they are in fact 
 typed arrays, since JS doesn't have static typing.
 
 - Typed arrays already have prototypes, and those are observable 
 regardless of expandability.  A typed array from one global object will 
 have a different prototype than a typed array from a different global 
 object.  Or am I misunderstanding the spec?
 
 - Typed arrays already have to know about their buffer.
 
 - Typed arrays already have to know about their offset into the buffer.  
 Or, more likely, they have to have a second pointer that points directly 
 at the base from which they are indexed.
 
 - Typed arrays already have to know their length.
 
 You're not proposing changing these aspects of typed arrays, right?
 
 Of course not, but for very small fixed length arrays whose .buffer is 
 never accessed, an implementation might optimize harder.
 
 As I said, of course you can do this, and one way you could try harder is 
 to put the buffer pointer in a side table.  The side table maps array object 
 pointers to their buffers, and you only make an entry in this table if 
 .buffer is mentioned.
 
 But if we believe that this is a sensible thing for a VM to do - and of 
 course it is! - then the same thing can be done for the custom property 
 storage pointer.
 
 It's hard for me to say no, Filip's analysis shows that's never 
 worthwhile, for all time.
 
 The super short message is this: so long as an object obeys object 
 identity on '==' then you can have free if unused, suboptimal if you use 
 them custom properties by using a weak map on the side.  This is true of 
 typed arrays and it would be true of any other object that does 
 object-style ==.  If you allocate such an object and never add a custom 
 property then the weak map will never have an entry for it; but if you put 
 custom properties in the object then the map will have things in it.  But 
 with typed arrays you can do even better as my previous message suggests: 
 so long as an object has a seldom-touched field and you're willing to eat 
 an extra indirection or an extra branch on that field, you can have free 
 if unused, still pretty good if you use them custom properties by 
 displacing that field.  Typed arrays have both of these properties right 
 now and so expandability is a free lunch.
 
 The last sentence makes a for-all assertion I don't think implementations 
 must be constrained by.
 
 How so?  It is true that some VM implementations will be better than others. 
  But ultimately every VM can implement every optimization that every other 
 VM has; in fact my impression is that this is exactly what is happening as 
 we speak.
 
 So, it doesn't make much sense to make language design decisions because it 
 might make some implementor's life easier right now.  If you could argue 
 that something will never be efficient if we add feature X, then that might 
 be an interesting argument.  But as soon as we identify one sensible 
 optimization strategy for making something free, I would tend to think that 
 this is sufficient to conclude that the feature is free and there is no need 
 to constrain it.  If we don't do this then we risk adding cargo-cult 
 performance features that rapidly become obsolete.
 
 This general argument bothers me slightly, because it assumes no opportunity 
 cost in making something free(ish). Even if you can demonstrate that allowing 
 X can be made fast, it isn't a complete argument for allowing X, since 
 disallowing X might enable some other optimization or feature or semantic 
 simplification.  Such demonstrations are still useful, since they can shoot 
 down objections based solely on performance.
 
 But maybe I'm misinterpreting ...sufficient to conclude...that there is no 
 need to constrain [the feature]. Perhaps you only meant that there is no 
 need to constrain it *for reasons of performance*? If so, then you only need 
 consider the opportunity cost of other optimizations.

Yeah, I might have overstated this.  My gut intuition is that performance 
shouldn't be a great reason for deciding PL features to begin with.  But in the 
cases where you have the urge to add or remove a feature solely because of 
performance, I think that a sufficient counterargument is to show that there 
exists some sensible optimization strategy that obviates the feature (or its 
removal).  And yes, opportunity cost ought to be considered.  If you can make

Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Andreas Rossberg
On 5 September 2013 03:11, Niko Matsakis n...@alum.mit.edu wrote:
 On Wed, Sep 04, 2013 at 02:41:24PM -0700, Brendan Eich wrote:
 Both adds up to two, not one, so the counterargument is odd beats
 odder, and prohibiting expandos keeps the oddness down to == and
 nothing else.

 I just want to agree with this. I think the behavior of `==` is
 unfortunate, but much more of a corner case than expando properties
 disappearing and reappearing willy nilly as users switch between
 otherwise identical views on the same data.

 I feel like `==` can be explained by the fact that these are distinct
 objects pointing at the same underlying buffer. People get aliasing.
 But the fact that expando properties would live *on the wrapper* and
 not on the *underlying data* is quite surprising and counterintuitive
 -- some of the data (the elements) are aliased, but others are not.

Maybe it actually _is_ worth considering a different equality
semantics for structs and typed arrays. In essence, they are a kind of
super-fat pointer, and we could give them the usual notion of (fat)
pointer equality. That is, two objects are equal if they are
equivalent views to the same backing store. It would make them value
types, more or less.
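Expressed as a user-level predicate, the idea would be roughly (a sketch of
the intent, not a worked-out proposal):

    function viewEquals(a, b) {
      return Object.getPrototypeOf(a) === Object.getPrototypeOf(b) && // same kind of view
             a.buffer === b.buffer &&
             a.byteOffset === b.byteOffset &&
             a.length === b.length;
    }

    // Today `new Uint8Array(buf) === new Uint8Array(buf)` is false; under
    // such semantics the two (equivalent) views would compare equal.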

As an implementor, I don't like this idea too much :), but from a user
perspective it would probably be saner.

/Andreas


Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Dmitry Lomov
On Thu, Sep 5, 2013 at 4:29 AM, Filip Pizlo fpi...@apple.com wrote:



 This is where I return to the objectyness point: typed arrays are already
 spec'd to have a bunch of heavy reference-to-object behavior.  So making
 then expandable is no big deal.  And making then non-expandable means that
 we'll now live in a weirdo world where we have four different concepts of
 what it means to be a value:

 A) Full blown reference objects that you can do weird things to, like add
 properties and change __proto__, etc.  You can also make one non-extensible
 at your discretion, which fits into the bat-poop crazy "you can do
 anything" philosophy of full blown objects.  And that's great - that's the
 heart of the language, and I happen to enjoy it.

 B) Object types that are always non-extensible but otherwise still objecty
 - they have a prototype that is observable, they reveal their identity via
 ==, and you can actually inject stuff into them by modifying the
 appropriate Object.prototype.

 C) Values with whatever value type semantics we come up with in the future.

 D) Primitives.

 Now, I hope that we could get C and D to be as close as possible to each
 other.  But that still leaves three different behaviors.  This introduces a
 learning curve.  That's why (B) offends me.  It's subtly different from (A)
 and clearly different from either (C) or (D).

 Now, we actually also have a totally alternate behavior, used by binary
 data.  And my argument there is that I wouldn't get too offended by binary
 data acting weird, because the very notion of exposing binary data is weird
 to begin with.  I expect it to be used only for special graphicsy stuff and
 not for general-purpose value types for normal JS programs.  So it's OK
 to me if binary data is both weird and inconsistent with everything else.
  And no, I still don't view typed arrays as being part of binary data -
 it already appears to be the case that typed arrays have different buffer
 behavior to the struct types.  So they're just different.  And that's fine.


You are underestimating the diversity of species in (A). You *can* create
full-blown reference objects that are non-extensible in plain JavaScript
today: just call Object.preventExtensions(this) in the constructor!
So (B) *as you define it* is a subset of (A).
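For instance:

    function Point(x, y) {
      this.x = x;
      this.y = y;
      Object.preventExtensions(this);  // a full-blown object, born non-extensible
    }

    var p = new Point(1, 2);
    p.z = 3;                  // silently ignored (throws in strict mode)
    Object.isExtensible(p);   // false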

Note however that typed arrays as implemented by many vendors today are
neither in (A) nor in (B) - you can extend a typed array with _named_
properties, but you cannot extend a typed array with _indexed_ properties.
There is no Object.preventIndexedExtensions, so this sort of
(non-)extensibility is indeed a weird case. If the goal is to reduce the
zoo of object kinds, making typed arrays completely non-extensible puts
them firmly in (A): they are just full-blown objects that happen to be born
non-extensible - something that is already totally possible in the language
today.

As to binary data "acting weird": it might appear weird, but this is again
the kind of weirdness that is possible in JavaScript *today*, as
evidenced by Dave Herman's and my polyfill:
https://github.com/dherman/structs.js
(almost; there are subtle differences re whether struct fields are data
properties or getters/setters, but the key behaviors of assignments and
equality are modelled accurately).

Dmitry


Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Niko Matsakis
On Thu, Sep 05, 2013 at 09:15:11AM +0200, Andreas Rossberg wrote:
 Maybe it actually _is_ worth considering a different equality
 semantics for structs and typed arrays. In essence, they are a kind of
 super-fat pointer, and we could give them the usual notion of (fat)
 pointer equality. That is, two objects are equal if they are
 equivalent views to the same backing store. It would make them value
 types, more or less.
 
 As an implementor, I don't like this idea too much :), but from a user
 perspective it would probably be saner.

Perhaps. Note that arrays can still point at overlapping memory
without being *equal*. So the same basic guarantees hold as today:

  `==` = aliasing
  `!=` = nothing in particular


Niko


Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Andreas Rossberg
On 5 September 2013 12:24, Niko Matsakis n...@alum.mit.edu wrote:
 On Thu, Sep 05, 2013 at 09:15:11AM +0200, Andreas Rossberg wrote:
 Maybe it actually _is_ worth considering a different equality
 semantics for structs and typed arrays. In essence, they are a kind of
 super-fat pointer, and we could give them the usual notion of (fat)
 pointer equality. That is, two objects are equal if they are
 equivalent views to the same backing store. It would make them value
 types, more or less.

 As an implementor, I don't like this idea too much :), but from a user
 perspective it would probably be saner.

 Perhaps. Note that arrays can still point at overlapping memory
 without being *equal*. So the same basic guarantees hold as today:

   `==` = aliasing
   `!=` = nothing in particular

Yes, sure. The same holds for plain pointer equality in C, though
(thanks to primitive types of different size, unions, array semantics,
and other stuff). Pointer comparison is not for detecting aliasing,
unless you know what you are doing.

/Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Dmitry Lomov
(sorry for not getting on this thread earlier - I was off the grid for a
bit)

I think we should consider fixed-length ArrayTypes in this discussion. They
can be parts of structs. Consider:

    var A = new ArrayType(uint8, 10);
    var S = new Struct({a : A});
    var a = new A();
    var s = new S();
    a[0] = 10;
    a.foo = "foo";
    s.a = a;
Assignment to a struct field is a copy, essentially. Of course, s.a[0] is
now 10. But does s.a.foo exist? In the current semantics, there is no place
to store it, because a field 'a' of struct 'S' is just a storage designator
- there is no place in struct s to store the expando properties of fields,
and fields of fields, and fields of fields of fields...

Therefore in the current semantics fixed-length ArrayTypes, just like
StructTypes, are either non-expandable, or have to lose their expando
properties on assignment - a big surprise for the user!
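Concretely, continuing the example above (under the proposed binary data
API; nothing here is shipping today):

    s.a[0];   // 10        - the element data was copied
    s.a.foo;  // undefined - the expando had nowhere to go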

Now of course variable-sized ArrayTypes do not suffer from this issue, but
one could argue for consistency with fixed-sized ArrayTypes.

Dmitry



On Fri, Aug 30, 2013 at 9:18 PM, Brendan Eich bren...@mozilla.com wrote:

 David Herman wrote:

   Typed Arrays are a different beast that already exist in the real
 world.  I don't see any need for consistency between Typed Arrays and
 struct types. Consistency between Typed Arrays and Array is more important.


 Mostly agreed, except I'd just refine that to say there's no need for
 consistency *in this dimension*. It would be a shame if typed arrays weren't
 generalized by the typed objects API in general, and I worked hard to make
 the pieces fit together. That nuance aside,


 I think you are too kind :-|.

 Allen, the point about typed arrays being different from structs because
 some implementations make the former extensible and the latter do not exist
 in any implementation yet is a just-so story, half of which is
 hypothetical! I could just as well argue from Firefox's non-extensible
 precedent if I wanted to.

 The better argument is one that accounts for *why* structs are not
 extensible and how typed arrays differ, if they do differ, by design -- not
 based on implementation in some but not all browsers.


the fact that, in practice, arrays are patched with additional
 properties (in fact, IIRC the ES6 template strings API adds properties to
 arrays) suggests that non-extensibility would be a real incompatibility
 between arrays and typed arrays.


 This only goes so far, since one could try to argue from objects to
 structs in the same way. What's the difference-in-kind reason? I can give
 answers but I'm looking for others' answers.


So I'm cool with making typed arrays -- but not structs -- extensible.


 It's ok if we decide this, but let's have a not-just-so story beyond the
 array-like use-case one -- it's good but without something else, it could
 reason from objects to structs, but no one here wants extensible structs.
 (Right?)

 /be





-- 
Google Germany GmbH
Dienerstr. 12, 80331 München, DE


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Andreas Rossberg
On 4 September 2013 10:23, Dmitry Lomov dslo...@google.com wrote:
 I think we should consider fixed-length ArrayTypes in this discussion. They
 can be parts of structs.
 Consider
 var A = new ArrayType(uint8, 10);
 var S = new Struct({a : A});
 var a = new A();
 var s = new S();
 a[0] = 10;
 a.foo = foo;
 s.a = a;
 Assignment to a struct field is a copy, essentially. Of course, s.a[0] is
 now 10. But does s.a.foo exist? In the current semantics, there is no place
 to store it, because a field 'a' of struct 'S' is just a storage designator
 - there is no place in struct s to store the expando properties of fields
 and fields of fields and fields of fields of fields

 Therefore in current semantics fixed-length ArrayTypes, just like
 StructTypes, are either non-expandable, or have to lose their expanded
 properties on assignments -  big surprise for the user!

 Now of course variable-sized ArrayTypes do not suffer from this issue, but
 one could argue for consistency with fixed-sized ArrayTypes.

I was about to make the same point. :)

As part of binary data, typed arrays are implicitly constructed on
the fly as views on a backing store. Any notion of identity -- which
is the prerequisite for state -- is not particularly meaningful in
this setting. Also, it is preferable to make them as lightweight as
possible.

As for other typed arrays, the difference is subtle, and I'd rather go
for consistency.

/Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo


 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 
 On 4 September 2013 10:23, Dmitry Lomov dslo...@google.com wrote:
 I think we should consider fixed-length ArrayTypes in this discussion. They
 can be parts of structs.
 Consider
var A = new ArrayType(uint8, 10);
var S = new Struct({a : A});
var a = new A();
var s = new S();
a[0] = 10;
a.foo = foo;
s.a = a;
 Assignment to a struct field is a copy, essentially. Of course, s.a[0] is
 now 10. But does s.a.foo exist? In the current semantics, there is no place
 to store it, because a field 'a' of struct 'S' is just a storage designator
 - there is no place in struct s to store the expando properties of fields
 and fields of fields and fields of fields of fields
 
 Therefore in current semantics fixed-length ArrayTypes, just like
 StructTypes, are either non-expandable, or have to lose their expanded
 properties on assignments -  big surprise for the user!
 
 Now of course variable-sized ArrayTypes do not suffer from this issue, but
 one could argue for consistency with fixed-sized ArrayTypes.
 
 I was about to make the same point. :)
 
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.

Are you proposing changing how == and === work for typed arrays?  If not then 
this whole argument is moot. 

 Also, it is preferable to make them as lightweight as
 possible.

See my previous mail. You gain zero space and zero performance from making 
typed arrays non-extensible. 

 
 As for other typed arrays, the difference is subtle, and I'd rather go
 for consistency.
 
 /Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 7:55 AM, Andreas Rossberg rossb...@google.com wrote:

 On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.
 
 Are you proposing changing how == and === work for typed arrays?  If not 
 then this whole argument is moot.
 
 No, they are just rather useless operations on data views. That
 doesn't make the argument moot.

The point is that as soon as you're using the copy '=' on binary data fields, 
you're already losing an observable notion of object identity.  The '=' here is 
already unlike the '=' operator for languages that have true value types - in 
those languages you wouldn't be able to observe if you got the *same* typed 
array or a different one but with the same underlying data.  In JS you will be 
able to observe this with '==' and '==='.  Hence, being able to also observe 
that you got a different one because you lost some meta-data (i.e. custom named 
properties) doesn't change the fact that the quirky semantics were already 
observable to the user.

 
 Also, it is preferable to make them as lightweight as
 possible.
 
 See my previous mail. You gain zero space and zero performance from making 
 typed arrays non extensible.
 
 I think you are jumping to conclusions. You can very well optimize the
 representation of typed arrays if they don't have user-defined
 properties. Whether that's worth it I can't tell without experiments.

I don't think this is a matter of opinion.  There is state that typed arrays 
are required to store but that is not accessed on the most critical of hot 
paths, which naturally allows us to play displaced pointer tricks.

It would also be useful, if you want to argue this point, if you replied to my 
previous discussion of why there is no performance difference between 
expandable and non-expandable typed arrays.  I'll copy that here in case you 
missed it:

A typed array *must* know about the following bits of information:

T: Its own type.
B: A base pointer (not the buffer but the thing you index off of).
L: Its length.

But that only works if it "owns" its buffer - that is, it was allocated using,
for example, new Int8Array(100), and you never used the .buffer property.  So
in practice you also need:

R: Reserved space for a pointer to a buffer.

Now observe that 'R' can be reused for either a buffer pointer or a pointer to 
overflow storage for named properties.  If you have both a buffer and overflow 
storage, you can save room in the overflow storage for the buffer pointer (i.e. 
displace the buffer pointer into the property storage).  We play a slightly 
less ambitious trick, where R either points to overflow storage or NULL.  Most 
typed arrays don't have a .buffer, but once they get one, we allocate overflow 
storage and reserve a slot in there for the buffer pointer.  So you pay *one 
more* word of overhead for typed arrays with buffers even if they don't have 
named properties.  I think that's probably good enough - I mean, in that case, 
you have a freaking buffer object as well so you're not exactly conserving 
memory.

But, using R as a direct pointer to the buffer would be a simple hack if we 
really felt like saving one word when you also already have a separate buffer 
object.

I could sort of imagine going further and using T as a displaced pointer and 
saving an extra word, but that might make type checks more expensive, sometimes.

So lets do the math, on both 32-bit and 64-bit (where 64-bit implies 64-bit 
pointers), to see how big this would be.

32-bit:

T = 4 bytes, B = 4 bytes, L = 4 bytes, R = 4 bytes.  So, you get 16 bytes of 
overhead for most typed arrays, and 20 if you need to use R as an overflow 
storage pointer and displace the buffer pointer into the overflow storage.

64-bit:

T = 8 bytes, B = 8 bytes, L = 4 bytes, R = 8 bytes.  This implies you have 4 
bytes to spare if you want objects 8-byte aligned (we do); we use this for some 
extra bookkeeping.  So you get 32 bytes of overhead for most typed arrays, and 
40 if you need to use R as an overflow storage pointer and displace the buffer 
pointer into the overflow storage.

As far as I can tell, this object model compresses typed arrays about as much 
as they could be compressed while also allowing them to be extensible.  The 
downside is that you pay a small penalty for typed arrays that have an active 
buffer, in the case that you either accessed the .buffer property or you 
constructed the typed array using a constructor that takes a buffer as an 
argument.

So, how big are your non-expandable typed arrays, and what do they look like?
If they're not smaller than 16 bytes in the common case with 32-bit pointers, 
or 32 bytes in the common case with 64-bit pointers

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Andreas Rossberg
On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.

 Are you proposing changing how == and === work for typed arrays?  If not then 
 this whole argument is moot.

No, they are just rather useless operations on data views. That
doesn't make the argument moot.

 Also, it is preferable to make them as lightweight as
 possible.

 See my previous mail. You gain zero space and zero performance from making 
 typed arrays non extensible.

I think you are jumping to conclusions. You can very well optimize the
representation of typed arrays if they don't have user-defined
properties. Whether that's worth it I can't tell without experiments.
Admittedly, it's a minor point.

/Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Andreas Rossberg
On 4 September 2013 17:11, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 7:55 AM, Andreas Rossberg rossb...@google.com wrote:
 On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.

 Are you proposing changing how == and === work for typed arrays?  If not
 then this whole argument is moot.

 No, they are just rather useless operations on data views. That
 doesn't make the argument moot.

 The point is that as soon as you're using the copy '=' on binary data
 fields, you're already losing an observable notion of object identity.  The
 '=' here is already unlike the '=' operator for languages that have true
 value types - in those languages you wouldn't be able to observe if you got
 the *same* typed array or a different one but with the same underlying data.
 In JS you will be able to observe this with '==' and '==='.  Hence, being
 able to also observe that you got a different one because you lost some
 meta-data (i.e. custom named properties) doesn't change the fact that the
 quirky semantics were already observable to the user.

I didn't say it's unobservable -- every twist in the gut is observable
in JavaScript. I said it's rather meaningless. That is, from a
practical perspective, I'd rather not recommend relying on it, unless
you are up for subtle and brittle code.


 As far as I can tell, this object model compresses typed arrays about as
 much as they could be compressed while also allowing them to be extensible.
 The downside is that you pay a small penalty for typed arrays that have an
 active buffer, in the case that you either accessed the .buffer property
 or you constructed the typed array using a constructor that takes a buffer
 as an argument.

I really don't feel like getting into this argument -- as I said it's
a minor point. Just note that the optimisation you suggest might not
be worth it in every VM (i.e., there could be a substantial impedance
mismatch), and moreover, that the above case might be not-so-uncommon.

/Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo


 On Sep 4, 2013, at 9:03 AM, Andreas Rossberg rossb...@google.com wrote:
 
 On 4 September 2013 17:11, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 7:55 AM, Andreas Rossberg rossb...@google.com wrote:
 On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.
 
 Are you proposing changing how == and === work for typed arrays?  If not
 then this whole argument is moot.
 
 No, they are just rather useless operations on data views. That
 doesn't make the argument moot.
 
 The point is that as soon as you're using the copy '=' on binary data
 fields, you're already losing an observable notion of object identity.  The
 '=' here is already unlike the '=' operator for languages that have true
 value types - in those languages you wouldn't be able to observe if you got
 the *same* typed array or a different one but with the same underlying data.
 In JS you will be able to observe this with '==' and '==='.  Hence, being
 able to also observe that you got a different one because you lost some
 meta-data (i.e. custom named properties) doesn't change the fact that the
 quirky semantics were already observable to the user.
 
 I didn't say it's unobservable -- every twist in the gut is observable
 in JavaScript. I said it's rather meaningless. That is, from a
 practical perspective, I'd rather not recommend relying on it, unless
 you are up for subtle and brittle code.

Are you saying that users shouldn't rely on == on objects?  My concern here is 
that binary data, which is a rather obscure addition to the language, doesn't 
break mainstream uses of the language. Disallowing custom properties on typed 
array objects just because binary data assignments lose object identity is 
silly. You're already losing object identity and it's already observable. 
Custom properties have nothing to do with this. 

-F

 
 
 As far as I can tell, this object model compresses typed arrays about as
 much as they could be compressed while also allowing them to be extensible.
 The downside is that you pay a small penalty for typed arrays that have an
 active buffer, in the case that you either accessed the .buffer property
 or you constructed the typed array using a constructor that takes a buffer
 as an argument.
 
 I really don't feel like getting into this argument -- as I said it's
 a minor point. Just note that the optimisation you suggest might not
 be worth it in every VM (i.e., there could be a substantial impedance
 mismatch), and moreover, that the above case might be not-so-uncommon.
 
 /Andreas


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Brendan Eich

Filip Pizlo wrote:
So, how big are your non-expanddable typed arrays, and what do they 
look like?  If they're not smaller than 16 bytes in the common case 
with 32-bit pointers, or 32 bytes in the common case with 64-bit 
pointers, then there is no performance argument in favor of getting 
rid of expandable properties.


I like your analysis, it helps greatly to be quantitative and to talk 
concretely about implementation trade-offs. However, I don't think it 
proves as much as you assert.


Suppose I want (as IBM did for years, and may still) to implement IEEE 
754r decimal in JS, with minimal overhead. I would need 128 bits of flat 
storage, no variable length, no .buffer or aliasing, and *no expandos*. 
Can binary data help me do that? If so, how small can the thing be? I'd 
like a multiple of 16 bytes, but on 64-bit targets that does not leave 
enough room for TBLR and we don't really need BLR anyway.


If we can't implement efficient-enough 754r decimal using binary data, 
that's sad. Not the end of the world, and it doesn't prove a whole lot 
about anything (perhaps we'll figure out something next year). But the 
small, fixed-size array case exists (think of Float32Array 4-vectors, 
homogeneous coordinates). It seems to me you are giving this use-case 
short shrift.


/be


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Niko Matsakis
I think Filip is right that given sufficient cleverness extensible
properties for typed objects can be implemented efficiently. The real
question is what the behavior SHOULD be. As others have pointed out,
we are not forced to support extensible properties for web compat
reasons.

I also think it is very important and useful to have typed objects be
a generalization of typed arrays. I suspect nobody wants an "almost
but not quite the same" set of array types. It'd be my preference that
(eventually) the specification for typed arrays can just be `var
Uint16Array = new ArrayType(uint16)`, which I believe is currently
plausible.

In light of this consideration, that means that adding extensible
properties to typed arrays means adding extensible properties to all
typed objects that are arrays (that is, instances of some type
defined by `new ArrayType()`).

As Dmitry pointed out, extensible properties are only possible for
top-level objects. I think this results in a surprising and
non-composable spec.

The surprising behavior isn't limited to the copying example that
Dmitry gave. Another problem is that instances of array types that are
found embedded in other structures don't have the full capabilities of
top-level instances. Without extensible properties, it is true that
if I have a function that is given a typed object (of any kind, array
or struct) and uses it, I can also provide it with an instance of that
same type that is a part of a bigger structure.

For example:

function doSomething(anArray) {
    anArray[0] = anArray[1];
}

// Invoke doSomething with top-level array
var TwoUint8s = new ArrayType(uint8, 2);
doSomething(new TwoUint8s());

// Invoke doSomething with array that is
// embedded within a struct:
var MyStruct = StructType({a: TwoUint8s});
var instance = new MyStruct();
doSomething(instance.a);

But this no longer works if `doSomething` makes use of extensible
properties:

function doSomething(anArray) {
    anArray[0] = anArray[1];
    anArray.foo = anArray.bar;
}

Now the second use case doesn't work.

To me, it seems a shame to trade a simple story (typed objects let
you define the layout and fields of an object, full stop) for a more
complex, non-composable one (...except for extra fields on arrays,
which only work some of the time).



Niko


On Wed, Sep 04, 2013 at 08:11:14AM -0700, Filip Pizlo wrote:
 
 On Sep 4, 2013, at 7:55 AM, Andreas Rossberg rossb...@google.com wrote:
 
  On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
  On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
  As part of binary data, typed arrays are implicitly constructed on
  the fly as views on a backing store. Any notion of identity -- which
  is the prerequisite for state -- is not particularly meaningful in
  this setting.
  
  Are you proposing changing how == and === work for typed arrays?  If not 
  then this whole argument is moot.
  
  No, they are just rather useless operations on data views. That
  doesn't make the argument moot.
 
 The point is that as soon as you're using the copy '=' on binary data fields, 
 you're already losing an observable notion of object identity.  The '=' here 
 is already unlike the '=' operator for languages that have true value types - 
 in those languages you wouldn't be able to observe if you got the *same* 
 typed array or a different one but with the same underlying data.  In JS you 
 will be able to observe this with '==' and '==='.  Hence, being able to also 
 observe that you got a different one because you lost some meta-data (i.e. 
 custom named properties) doesn't change the fact that the quirky semantics 
 were already observable to the user.
 
  
  Also, it is preferable to make them as lightweight as
  possible.
  
  See my previous mail. You gain zero space and zero performance from making 
  typed arrays non extensible.
  
  I think you are jumping to conclusions. You can very well optimize the
  representation of typed arrays if they don't have user-defined
  properties. Whether that's worth it I can't tell without experiments.
 
 I don't think this is a matter of opinion.  There is state that typed arrays 
 are required to store but that is not accessed on the most critical of hot 
 paths, which naturally allows us to play displaced pointer tricks.
 
 It would also be useful, if you want to argue this point, if you replied to 
 my previous discussion of why there is no performance difference between 
 expandable and non-expandable typed arrays.  I'll copy that here in case you 
 missed it:
 
 A typed array *must* know about the following bits of information:
 
 T: Its own type.
 B: A base pointer (not the buffer but the thing you index off of).
 L: Its length.
 
 But that only works if it owns its buffer - that is it was allocated using 
 for example new Int8Array(100) and you never used the .buffer property.  So 
 in practice you also need:
 
 R: Reserved space

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 11:33 AM, Brendan Eich bren...@mozilla.com wrote:

 Filip Pizlo wrote:
 So, how big are your non-expanddable typed arrays, and what do they look 
 like?  If they're not smaller than 16 bytes in the common case with 32-bit 
 pointers, or 32 bytes in the common case with 64-bit pointers, then there is 
 no performance argument in favor of getting rid of expandable properties.
 
 I like your analysis, it helps greatly to be quantitative and to talk 
 concretely about implementation trade-offs. However, I don't think it proves 
 as much as you assert.

Depends on your interpretation of what I'm asserting. ;-)  I'm talking about 
typed array objects - ones that can be pointed to by JavaScript values, and 
that have a prototype chain, can be aliased, etc.

 
 Suppose I want (as IBM did for years, and may still) to implement IEEE 754r 
 decimal in JS, with minimal overhead. I would need 128 bits of flat storage, 
 no variable length, no .buffer or aliasing, and *no expandos*. Can binary 
 data help me do that? If so, how small can the thing be? I'd like a multiple 
 of 16 bytes, but on 64-bit targets that does not leave enough room for TBLR 
 and we don't really need BLR anyway.
 
 If we can't implement efficient-enough 754r decimal using binary data, that's 
 sad. Not the end of the world, and it doesn't prove a whole lot about 
 anything (perhaps we'll figure out something next year). But the small, 
 fixed-size array case exists (think of Float32Array 4-vectors, homogeneous 
 coordinates). It seems to me you are giving this use-case short shrift.

I'm not.  I care deeply about small arrays.  This analysis wasn't merely a 
thought experiment, it arose from me spending a month trying to figure out how 
to aggressively reduce the overhead of typed arrays.  My original hope was to 
get down to Java-level overheads and my conclusion was that unless I wanted to 
severely punish anyone who said .buffer, I'd have to have one more word of 
overhead than Java (i.e. 16 bytes on 32-bit instead of 12 bytes on 32-bit).

My point is that having custom properties, or not, doesn't change the overhead 
for the existing typed array spec and hence has no effect on small arrays.  The 
reasons for this include:

- Typed arrays already have to be objects, and hence have a well-defined 
behavior on '=='.

- Typed arrays already have to be able to tell you that they are in fact typed 
arrays, since JS doesn't have static typing.

- Typed arrays already have prototypes, and those are observable regardless of 
expandability.  A typed array from one global object will have a different 
prototype than a typed array from a different global object.  Or am I 
misunderstanding the spec?

- Typed arrays already have to know about their buffer.

- Typed arrays already have to know about their offset into the buffer.  Or, 
more likely, they have to have a second pointer that points directly at the 
base from which they are indexed.

- Typed arrays already have to know their length.

You're not proposing changing these aspects of typed arrays, right?

The super short message is this: so long as an object obeys object identity on 
'==' then you can have "free if unused, suboptimal if you use them" custom 
properties by using a weak map on the side.  This is true of typed arrays and 
it would be true of any other object that does object-style ==.  If you 
allocate such an object and never add a custom property then the weak map will 
never have an entry for it; but if you put custom properties in the object then 
the map will have things in it.  But with typed arrays you can do even better 
as my previous message suggests: so long as an object has a seldom-touched 
field and you're willing to eat an extra indirection or an extra branch on that 
field, you can have "free if unused, still pretty good if you use them" custom 
properties by displacing that field.  Typed arrays have both of these 
properties right now and so expandability is a free lunch.

Still find this discussion amusing?  Here's the long story: It is these 
things that I list above that lead to a 16 byte overhead on 32-bit, and a 
32-byte overhead on 64-bit in the best sane case.  Giving typed array objects 
expandability doesn't add to this overhead, because two of the fields necessary 
to implement the above (the type, and the buffer) can be displaced for pointing 
to property storage.  Any imaginable attempt to reduce the overhead incurred by 
the information - using BBOP (big bag of pages) for the type, using an 
out-of-line weak map for the buffer or the type, encoding some of the bits 
inside the pointer to the typed array, etc. - can be also used to eradicate any 
space overhead you'd need for custom properties, so long as you're on board 
with the "free if unused, sub-optimal if you use them" philosophy.
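
For the field-displacement variant, here is a rough user-land model of the same cost structure; the single "extra" slot stands in for the engine's buffer pointer, and nothing about any real engine's internals is implied:

function makeView(buffer) {
  return { extra: buffer };                  // common case: the slot holds the buffer
}

function getBuffer(view) {
  var e = view.extra;                        // one branch on the rarely-hot path
  return e instanceof ArrayBuffer ? e : e.buffer;
}

function setCustomProperty(view, key, value) {
  var e = view.extra;
  if (e instanceof ArrayBuffer) {            // displace the slot on first use
    e = { buffer: e, props: Object.create(null) };
    view.extra = e;
  }
  e.props[key] = value;
}

var v = makeView(new ArrayBuffer(16));
getBuffer(v);                                // fast path: no displacement yet
setCustomProperty(v, "label", "xs");         // now the slot is displaced
getBuffer(v);                                // still works, one extra hop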

So if we did introduce a new type that has lower overheads, for example a new 
kind of typed arrays - or an entirely new kind of type, say Int64

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 12:17 PM, Niko Matsakis n...@alum.mit.edu wrote:

 I think Filip is right that given sufficient cleverness extensible
 properties for typed objects can be implemented efficiently. The real
 question is what the behavior SHOULD be. As others have pointed out,
 we are not forced to support extensible properties for web compat
 reasons.
 
 I also think it is very important and useful to have typed objects be
 a generalization of typed arrays. I suspect nobody wants an almost
 but not quite the same set of array types. It'd be my preference that
 (eventually) the specification for typed arrays can just be var
 Uint16Array = new ArrayType(uint16), which I believe is currently
 plausible.
 
 In light of this consideration, that means that adding exensible
 properties to typed arrays means adding extensible properties to all
 typed objects that are arrays (that is, instances of some type
 defined by `new ArrayType()`).
 
 As Dmitry pointed out, extensible properties are only possible for
 top-level objects. I think this results in a surprising and
 non-composable spec.
 
 The surprising behavior isn't limited to the copying example that
 Dmitry gave. Another problem is that instances of array types that are
 found embedded in other structures don't have the full capabilities of
 top-level instances. Without extensible properties, it is true that
 if I have a function that is given a typed object (of any kind, array
 or struct) and uses it, I can also provide it with an instance of that
 same type that is a part of a bigger structure.
 
 For example:
 
function doSomething(anArray) {
anArray[0] = anArray[1];
}
 
// Invoke doSomething with top-level array
var TwoUint8s = new ArrayType(uint8, 2);
doSomething(new TwoUint8s());
 
// Invoke doSomething with array that is
// embedded within a struct:
var MyStruct = StructType({a: TwoUint8s});
var instance = new MyStruct();
doSomething(instance.a);
 
 But this no longer works if `doSomething` makes use of extensible
 properties:
 
function doSomething(anArray) {
anArray[0] = anArray[1];
anArray.foo = anArray.bar;
}
 
 Now the second use case doesn't work.
 
 To me, it seems a shame to trade a simple story (typed objects let
 you define the layout and fields of an object, full stop) for a more
 complex, non-composable one (...except for extra fields on arrays,
 which only work some of the time).

Hi Niko,

The reason why I'm OK with the more complex story is that we already have that 
story for '=='.  To me, named object properties are analogous to being able to 
identify whether you have the same object or a different object: both are 
mechanisms that reveal aliasing to the user.  Having typed objects that are 
embedded in other ones already breaks ==.

-Filip


 
 
 
 Niko
 
 
 On Wed, Sep 04, 2013 at 08:11:14AM -0700, Filip Pizlo wrote:
 
 On Sep 4, 2013, at 7:55 AM, Andreas Rossberg rossb...@google.com wrote:
 
 On 4 September 2013 16:44, Filip Pizlo fpi...@apple.com wrote:
 On Sep 4, 2013, at 3:05 AM, Andreas Rossberg rossb...@google.com wrote:
 As part of binary data, typed arrays are implicitly constructed on
 the fly as views on a backing store. Any notion of identity -- which
 is the prerequisite for state -- is not particularly meaningful in
 this setting.
 
 Are you proposing changing how == and === work for typed arrays?  If not 
 then this whole argument is moot.
 
 No, they are just rather useless operations on data views. That
 doesn't make the argument moot.
 
 The point is that as soon as you're using the copy '=' on binary data 
 fields, you're already losing an observable notion of object identity.  The 
 '=' here is already unlike the '=' operator for languages that have true 
 value types - in those languages you wouldn't be able to observe if you got 
 the *same* typed array or a different one but with the same underlying data. 
  In JS you will be able to observe this with '==' and '==='.  Hence, being 
 able to also observe that you got a different one because you lost some 
 meta-data (i.e. custom named properties) doesn't change the fact that the 
 quirky semantics were already observable to the user.
 
 
 Also, it is preferable to make them as lightweight as
 possible.
 
 See my previous mail. You gain zero space and zero performance from making 
 typed arrays non extensible.
 
 I think you are jumping to conclusions. You can very well optimize the
 representation of typed arrays if they don't have user-defined
 properties. Whether that's worth it I can't tell without experiments.
 
 I don't think this is a matter of opinion.  There is state that typed arrays 
 are required to store but that is not accessed on the most critical of hot 
 paths, which naturally allows us to play displaced pointer tricks.
 
 It would also be useful, if you want to argue this point, if you replied to 
 my previous discussion of why there is no performance difference between

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Niko Matsakis
On Wed, Sep 04, 2013 at 12:38:39PM -0700, Filip Pizlo wrote:
 The reason why I'm OK with the more complex story is that we already
 have that story for '=='.  To me, named object properties are
 analogous to being able to identify whether you have the same object
 or a different object: both are mechanisms that reveal aliasing to
 the user.  Having typed objects that are embedded in other ones
 already breaks ==.

I'm afraid I don't quite follow you here. The point is not that
extensible properties permit the user to observe aliasing: since
arrays are mutable, aliasing is observable even without `==` or
extensible properties.

Rather, I am saying that it seems desirable for all typed objects with
a particular type to support the same set of operations: but this is
not possible if we permit extensible properties on typed arrays, since
there will always be a distinction between a top-level array (i.e.,
one that owns its own memory) and a derived array (one that aliases
another object).

[Well, I suppose it would be possible to permit *all* array instances
 to have extensible properties, whether they are derived or not, but
 that seems surprising indeed. It would imply that if you did
 something like:

 var MyArray = new ArrayType(...);
 var MyStruct = new StructType({f: MyArray});
 var struct = new MyStruct(...);
 var array1 = struct.f;
 var array2 = struct.f;

 then `array1` and `array2` would have disjoint sets of extensible
 properties.]


Niko
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 1:00 PM, Niko Matsakis n...@alum.mit.edu wrote:

 On Wed, Sep 04, 2013 at 12:38:39PM -0700, Filip Pizlo wrote:
 The reason why I'm OK with the more complex story is that we already
 have that story for '=='.  To me, named object properties are
 analogous to being able to identify whether you have the same object
 or a different object: both are mechanisms that reveal aliasing to
 the user.  Having typed objects that are embedded in other ones
 already breaks ==.
 
 I'm afraid I don't quite follow you here. The point is not that
 extensible properties permit the user to observe aliasing: since
 arrays are mutable, aliasing is observable even without `==` or
 extensible properties.

Ah, sorry, I was unclear.  My point is that given two typed arrays a and b, 
both == and custom properties allow you to tell the difference between a and b 
sharing the same backing data (the kind of aliasing you speak of) and actually 
being the same object.

== allows you to do this because either a == b evaluates true or it evaluates 
false.  If you allocate a typed array 'a' and then store it into a binary data 
field and then load from that field later into a variable 'b', then a != b.  
Hence, you've observed that a and b don't point to the same object.

Likewise, custom named properties would also allow you to make the same 
observation.  If you allocate a typed array 'a', then store a custom field into 
it ('a.foo = 42'), then store it into a binary data field and later load from 
it into 'b', then 'b.foo != 42'.  Hence, again, you've observed that a and b 
don't point to the same object even though they are both wrappers for the same 
underlying array data.
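
The same two observations can be reproduced with nothing but shipping typed-array APIs, using a plain ArrayBuffer to stand in for the struct's backing storage (the expando line assumes an engine that allows expandos on typed arrays):

var a = new Uint8Array([1, 2]);
a.foo = 42;                          // custom named property on the view 'a'

var storage = new ArrayBuffer(2);    // stands in for the binary data field
new Uint8Array(storage).set(a);      // storing copies the data, not the object
var b = new Uint8Array(storage);     // loading the field yields a fresh view

a === b;                             // false -- identity was not preserved
b[0] === a[0];                       // true  -- the underlying data came along
b.foo;                               // undefined -- the expando did not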

I agree that both of these aspects of binary data are quirky.  My observation 
is that prohibiting custom properties doesn't fix the underlying issue.

 
 Rather, I am saying that it seems desirable for all typed objects with
 a particular type to support the same set of operations: but this is
 not possible if we permit extensible properties on typed arrays, since
 there will always be a distinction between a top-level array (i.e.,
 one that owns its own memory) and a derived array (one that aliases
 another object).

Right but that distinction is already there for ==.

 
 [Well, I suppose it would be possible to permit *all* array instances
 to have extensible properties, whether they are derived or not, but
 that seems surprising indeed. It would imply that if you did
 something like:
 
 var MyArray = new ArrayType(...);
  var MyStruct = new StructType({f: MyArray});
 var struct = new MyStruct(...);
 var array1 = struct.f;
 var array2 = struct.f;
 
 then `array1` and `array2` would have disjoint sets of extensible
 properties.]

Yes, they would.  But even if they didn't, then array1 != array2, which is 
equally odd.

-Filip


 
 
 Niko

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Brendan Eich

Filip Pizlo mailto:fpi...@apple.com
September 4, 2013 12:34 PM
My point is that having custom properties, or not, doesn't change the 
overhead for the existing typed array spec and hence has no effect on 
small arrays.  The reasons for this include:


- Typed arrays already have to be objects, and hence have a 
well-defined behavior on '=='.


- Typed arrays already have to be able to tell you that they are in 
fact typed arrays, since JS doesn't have static typing.


- Typed arrays already have prototypes, and those are observable 
regardless of expandability.  A typed array from one global object 
will have a different prototype than a typed array from a different 
global object.  Or am I misunderstanding the spec?


- Typed arrays already have to know about their buffer.

- Typed arrays already have to know about their offset into the 
buffer.  Or, more likely, they have to have a second pointer that 
points directly at the base from which they are indexed.


- Typed arrays already have to know their length.

You're not proposing changing these aspects of typed arrays, right?


Of course not, but for very small fixed length arrays whose .buffer is 
never accessed, an implementation might optimize harder. It's hard for 
me to say no, Filip's analysis shows that's never worthwhile, for all 
time.


The super short message is this: so long as an object obeys object 
identity on '==' then you can have free if unused, suboptimal if you 
use them custom properties by using a weak map on the side.  This is 
true of typed arrays and it would be true of any other object that 
does object-style ==.  If you allocate such an object and never add a 
custom property then the weak map will never have an entry for it; but 
if you put custom properties in the object then the map will have 
things in it.  But with typed arrays you can do even better as my 
previous message suggests: so long as an object has a seldom-touched 
field and you're willing to eat an extra indirection or an extra 
branch on that field, you can have free if unused, still pretty good 
if you use them custom properties by displacing that field.  Typed 
arrays have both of these properties right now and so expandability is 
a free lunch.


The last sentence makes a for-all assertion I don't think 
implementations must be constrained by. Small fixed-length arrays whose 
.buffer is never accessed (which an implementation might be able to 
prove by type inference) could be optimized harder.


The lack of static types in JS does not mean exactly one implementation 
representation must serve for all instances of a given JS-level 
abstraction. We already have strings optimized variously in the top VMs, 
including Chords or Ropes, dependent strings, different character sets, etc.


Still find this discussion amusing?  Here's the long story: It is 
these things that I list above that lead to a 16 byte overhead on 
32-bit, and a 32-byte overhead on 64-bit in the best sane case. 
 Giving typed array objects expandability doesn't add to this 
overhead, because two of the fields necessary to implement the above 
(the type, and the buffer) can be displaced for pointing to property 
storage.  Any imaginable attempt to reduce the overhead incurred by 
the information - using BBOP (big bag of pages) for the type, using an 
out-of-line weak map for the buffer or the type, encoding some of the 
bits inside the pointer to the typed array, etc. - can be also used to 
eradicate any space overhead you'd need for custom properties, so long 
as you're on board with the free if unused, sub-optimal if you use 
them philosophy.


For something like decimal, it matters whether there's an empty side 
table and large-N decimal instances of total size N*S, vs. N*(S+K) for 
some constant K we could eliminate by specializing harder. Even better 
if we agree that decimal instances should be non-extensible (and have 
value not reference semantics -- more below).


- If the VM wants to go further and create immediate representations 
of some or all Int64's, similarly to what VMs do for JS small integers 
today, then the main problem you run into is object identity: does 
Int64(1).add(Int64(1)) == Int64(1).add(Int64(1))?  A naive JS 
implementation of an Int64 class would say that this is false, since 
it's likely to allocate a new Int64 each time.  But an immediate 
representation would have no choice but to say true.  You can work 
around this if you say that the VM's implementation of Int64 
operations behaves /as if/ the add()/sub()/whatever() methods used a 
singleton cache.  You can still then have custom properties; i.e. you 
could do Int64(2).foo = 42 and then Int64(1).add(Int64(1)).foo will 
return 42, since the VM can keep an 
immediate-int64-to-customproperties map on the side.  That's kind of 
analogous to how you could put a setter on field '2' of 
Array.prototype and do some really hilarious things.


The value objects proposal for ES7 is live, I'm championing

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 3:09 PM, Brendan Eich bren...@mozilla.com wrote:

 Filip Pizlo mailto:fpi...@apple.com
 September 4, 2013 12:34 PM
 My point is that having custom properties, or not, doesn't change the 
 overhead for the existing typed array spec and hence has no effect on small 
 arrays.  The reasons for this include:
 
 - Typed arrays already have to be objects, and hence have a well-defined 
 behavior on '=='.
 
 - Typed arrays already have to be able to tell you that they are in fact 
 typed arrays, since JS doesn't have static typing.
 
 - Typed arrays already have prototypes, and those are observable regardless 
 of expandability.  A typed array from one global object will have a 
 different prototype than a typed array from a different global object.  Or 
 am I misunderstanding the spec?
 
 - Typed arrays already have to know about their buffer.
 
 - Typed arrays already have to know about their offset into the buffer.  Or, 
 more likely, they have to have a second pointer that points directly at the 
 base from which they are indexed.
 
 - Typed arrays already have to know their length.
 
 You're not proposing changing these aspects of typed arrays, right?
 
 Of course not, but for very small fixed length arrays whose .buffer is never 
 accessed, an implementation might optimize harder.

As I said, of course you can do this, and one way you could try harder is to 
put the buffer pointer in a side table.  The side table maps array object 
pointers to their buffers, and you only make an entry in this table if .buffer 
is mentioned.

But if we believe that this is a sensible thing for a VM to do - and of course 
it is! - then the same thing can be done for the custom property storage 
pointer.

 It's hard for me to say no, Filip's analysis shows that's never worthwhile, 
 for all time.
 
 The super short message is this: so long as an object obeys object identity 
 on '==' then you can have free if unused, suboptimal if you use them 
 custom properties by using a weak map on the side.  This is true of typed 
 arrays and it would be true of any other object that does object-style ==.  
 If you allocate such an object and never add a custom property then the weak 
 map will never have an entry for it; but if you put custom properties in the 
 object then the map will have things in it.  But with typed arrays you can 
 do even better as my previous message suggests: so long as an object has a 
 seldom-touched field and you're willing to eat an extra indirection or an 
 extra branch on that field, you can have free if unused, still pretty good 
 if you use them custom properties by displacing that field.  Typed arrays 
 have both of these properties right now and so expandability is a free lunch.
 
 The last sentence makes a for-all assertion I don't think implementations 
 must be constrained by.

How so?  It is true that some VM implementations will be better than others.  
But ultimately every VM can implement every optimization that every other VM 
has; in fact my impression is that this is exactly what is happening as we 
speak.

So, it doesn't make much sense to make language design decisions because it 
might make some implementor's life easier right now.  If you could argue that 
something will never be efficient if we add feature X, then that might be an 
interesting argument.  But as soon as we identify one sensible optimization 
strategy for making something free, I would tend to think that this is 
sufficient to conclude that the feature is free and there is no need to 
constrain it.  If we don't do this then we risk adding cargo-cult performance 
features that rapidly become obsolete.

 Small fixed-length arrays whose .buffer is never accessed (which an 
 implementation might be able to prove by type inference) could be optimized 
 harder.

And my point is that if you do so, then the same technique can be trivially 
applied to the custom property storage pointer.

 
 The lack of static types in JS does not mean exactly one implementation 
 representation must serve for all instances of a given JS-level abstraction. 
 We already have strings optimized variously in the top VMs, including Chords 
 or Ropes, dependent strings, different character sets, etc.
 
 Still find this discussion amusing? Here's the long story: It is these 
 things that I list above that lead to a 16 byte overhead on 32-bit, and a 
 32-byte overhead on 64-bit in the best sane case.  Giving typed array 
 objects expandability doesn't add to this overhead, because two of the 
 fields necessary to implement the above (the type, and the buffer) can be 
 displaced for pointing to property storage.  Any imaginable attempt to 
 reduce the overhead incurred by the information - using BBOP (big bag of 
 pages) for the type, using an out-of-line weak map for the buffer or the 
 type, encoding some of the bits inside the pointer to the typed array, etc. 
 - can be also used to eradicate any space overhead you'd need

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Oliver Hunt

On Sep 4, 2013, at 4:15 PM, Filip Pizlo fpi...@apple.com wrote:

 
 On Sep 4, 2013, at 3:09 PM, Brendan Eich bren...@mozilla.com wrote:
 
 snip so much text :D 
 But with ES5 Object.preventExtensions, etc., the horse is out of the barn.
 
 It's there and we have to support it, and the fact that you can do 
 preventExtensions() to an object is a good thing.  That doesn't mean it 
 should become the cornerstone for every new feature.  If a user wants to 
 preventExtensions() on their object, then that's totally cool - and I'm not 
 arguing that it isn't.
 
 The argument I'm making is a different one: should an object be 
 non-expandable by default?

Actually, here's a very good example:  Why do Maps and Sets allow expandos?

* They are logically buckets, so expando properties seem unnecessary
* We have seen in the past that people get really confused about property 
access -- see the enumerable associative array articles that do new Array() 
to get there magical associative array.  For (probably) common cases of string 
and numeric properties:
  - someMap["foo"] = bar and someMap["foo"]; vs.
  - someMap.set("foo", bar) and someMap.get("foo")

are sufficiently close to the same that developers _will_ do this, and think 
that they're using a Map.
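
The confusion is easy to reproduce with nothing but the standard Map API:

var someMap = new Map();

someMap["foo"] = "bar";        // creates an expando on the Map *object*
someMap.get("foo");            // undefined -- the map itself is still empty
someMap.size;                  // 0
someMap.foo;                   // "bar" -- the value lives as a plain property

someMap.set("foo", "bar");     // the intended operation
someMap.get("foo");            // "bar"
someMap.size;                  // 1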

So should Map be inextensible by default? The argument against supporting 
expandos on a  typed array seems even stronger for these collection types.

--Oliver
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 2:41 PM, Brendan Eich bren...@mozilla.com wrote:

 Filip Pizlo wrote:
 I agree that both of these aspects of binary data are quirky.  My 
 observation is that prohibiting custom properties doesn't fix the underlying 
 issue.
 
 [snip]
var MyArray = new ArrayType(...);
    var MyStruct = new StructType({f: MyArray});
var struct = new MyStruct(...);
var array1 = struct.f;
var array2 = struct.f;
 
 then `array1` and `array2` would have disjoint sets of extensible
 properties.]
 
 Yes, they would.  But even if they didn't, then array1 != array2, which is 
 equally odd.
 
 Both add up to two, not one, so the counterargument is "odd beats odder", and 
 prohibiting expandos keeps the oddness down to == and nothing else.

What about being consistently odd?  To me, 'struct.f' means allocating a new 
[sic] array buffer view object.  That new object thus has all of the features 
you'd expect from a new object: it won't have custom properties, it will have 
some default prototype, and it will not be == to any other object.  Hence if 
you say struct.f.foo = 42, then struct.f.foo will subsequently return 
undefined.  No big deal - it was a new object.

 
 I'm not trying to persuade you here, just trying to agree on how to do 
 oddness accounting. It could be that we're better off with the oddness you 
 prefer, for human factors reasons of some kind.

I actually think that this simply isn't going to matter.  Binary data is there 
for hacking with bits.  Whether a struct.f, which is defined by the user to be 
an array, is expandable or not isn't going to be a big deal to most people.

On the other hand, empowering users to be able to carry around typed arrays 
with some extra meta-data could be useful to people.

 
 But lost expandos due to loss of identity are an especially nasty kind of 
 bug to find.

I'm actually curious - are you aware of such bugs, and what do they actually 
look like?  To me this is analogous to the question of whether an API returns 
to you the *same* object you passed in earlier, or a new object that is a copy 
- and my vague recollection of the various APIs and SDKs that I've used over 
the years is that whenever I see such issues, I make a note of them but never 
find myself having to think very hard about them.  And they rarely lead to 
interesting bugs.

 Is there any use-case here? We've never had a bug report asking us to make 
 SpiderMonkey's typed arrays extensible, AFAIK.

I was the one who brought up the use case. ;-)  Say I want a matrix.  I like 
saying:

function makeMatrix(rows, cols) {
var result = new Float32Array(rows * cols);
result.rows = rows;
result.cols = cols;
return result;
}

I realize this is goofy - I could have created a wrapper object around the 
Float32Array.  But that requires more code, and I've come to enjoy doing this 
kind of goofiness in scripting languages.
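
For comparison, a sketch of the wrapper alternative alluded to above; the constructor name and accessors are only illustrative:

function Matrix(rows, cols) {
  this.rows = rows;
  this.cols = cols;
  this.data = new Float32Array(rows * cols);
}

Matrix.prototype.get = function (r, c) {
  return this.data[r * this.cols + c];
};

Matrix.prototype.set = function (r, c, v) {
  this.data[r * this.cols + c] = v;
};

var m = new Matrix(3, 4);
m.set(0, 1, 2.5);
m.get(0, 1);                   // 2.5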

 
 /be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Brendan Eich

Filip Pizlo wrote:
Typed arrays have both of these properties right now and so 
expandability is a free lunch.


The last sentence makes a for-all assertion I don't think 
implementations must be constrained by. 


How so? It is true that some VM implementations will be better than 
others. But ultimately every VM can implement every optimization that 
every other VM has; in fact my impression is that this is exactly what 
is happening as we speak.


My for-all referred to all typed arrays across all VMs, not just all VMs.

Also just as a point of fact (something done, the Latin root means 
deed), I do not see the same optimizations being used in all VMs. For 
example, SpiderMonkey's TI (written up here: http://rfrn.org/~shu/drafts/ti.pdf 
[PLDI 2012]) is not being used elsewhere 
AFAIK -- please correct me if I'm mistaken.


So, it doesn't make much sense to make language design decisions 
because it might make some implementor's life easier right now. If you 
could argue that something will /never/ be efficient if we add feature 
X, then that might be an interesting argument. But as soon as we 
identify one sensible optimization strategy for making something free, 
I would tend to think that this is sufficient to conclude that the 
feature is free and there is no need to constrain it. If we don't do 
this then we risk adding cargo-cult performance features that rapidly 
become obsolete.


I agree that's a risk. I'm also with Niko in wanting to argue about what 
the semantics should be without appealing to performance arguments.


However, I still think you are verging on promising a free lunch. All 
methods in C++ cannot affordably be virtual. Expandos in JS cost. At 
fine enough grain, even pretty-well-predicted branches cost. Nothing is 
free-enough to discount forever in my bitter and long experience :-P.


The lack of static types in JS does not mean exactly one 
implementation representation must serve for all instances of a given 
JS-level abstraction. We already have strings optimized variously in 
the top VMs, including Chords or Ropes, dependent strings, different 
character sets, etc.


Still find this discussion amusing? Here's the long story: It is 
these things that I list above that lead to a 16 byte overhead on 
32-bit, and a 32-byte overhead on 64-bit in the best sane case. 
Giving typed array objects expandability doesn't add to this 
overhead, because two of the fields necessary to implement the above 
(the type, and the buffer) can be displaced for pointing to property 
storage. Any imaginable attempt to reduce the overhead incurred by 
the information - using BBOP (big bag of pages) for the type, using 
an out-of-line weak map for the buffer or the type, encoding some of 
the bits inside the pointer to the typed array, etc. - can be also 
used to eradicate any space overhead you'd need for custom 
properties, so long as you're on board with the free if unused, 
sub-optimal if you use them philosophy.


For something like decimal, it matters whether there's an empty side 
table and large-N decimal instances of total size N*S, vs. N*(S+K) 
for some constant K we could eliminate by specializing harder. Even 
better if we agree that decimal instances should be non-extensible 
(and have value not reference semantics -- more below).


With a side table, the constant K = 0 even if you have custom 
properties. The table will only have an entry for those instances that 
had custom properties.


I know, that's why I was attacking the non-side-table approach.

But the side table has its own down-side trade-offs: GC complexity, even 
costlier indirection, and strictly greater implementation complexity. If 
one could implement without having to mess with this K ?= 0 design 
decision and hassle with packing or else using a side-table, one's VM 
would be smaller, simpler, less buggy -- all else equal.


Now you may say that I'm betraying my hero Mr. Spock, whom I have 
invoked to argue that implementors should sacrifice so the mass of JS 
users can live long and prosper.


And you'd have me dead to rights -- if I thought JS users wanted 
expandos on binary data, that the lack of expandos there was a problem 
akin to the whole starship being blown up. But I do not believe that's 
the case.


If users don't care, then implementors should get a break and VMs should 
be simpler, ceteris paribus.


- If the VM wants to go further and create immediate representations 
of some or all Int64's, similarly to what VMs do for JS small 
integers today, then the main problem you run into is object 
identity: does Int64(1).add(Int64(1)) == Int64(1).add(Int64(1))? A 
naive JS implementation of an Int64 class would say that this is 
false, since it's likely to allocate a new Int64 each time. But an 
immediate representation would have no choice but to say true. You 
can work around this if you say that the VM's implementation of 
Int64 operations behaves /as if/ the add()/sub()/whatever() methods 
used

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Brendan Eich

Filip Pizlo mailto:fpi...@apple.com
September 4, 2013 4:45 PM

On Sep 4, 2013, at 2:41 PM, Brendan Eich bren...@mozilla.com 
mailto:bren...@mozilla.com wrote:



Filip Pizlo wrote:
I agree that both of these aspects of binary data are quirky.  My 
observation is that prohibiting custom properties doesn't fix the 
underlying issue.


[snip]

   var MyArray = new ArrayType(...);
   var MyStruct = new StructType({f: MyArray});
   var struct = new MyStruct(...);
   var array1 = struct.f;
   var array2 = struct.f;

then `array1` and `array2` would have disjoint sets of extensible
properties.]


Yes, they would.  But even if they didn't, then array1 != array2, 
which is equally odd.


 Both add up to two, not one, so the counterargument is "odd beats odder", 
 and prohibiting expandos keeps the oddness down to == and 
nothing else.


What about being consistently odd?  To me, 'struct.f' means allocating 
a new [sic] array buffer view object.  That new object thus has all of 
the features you'd expect from a new object: it won't have custom 
properties, it will have some default prototype, and it will not be == 
to any other object.  Hence if you say struct.f.foo = 42, then 
struct.f.foo will subsequently return undefined.  No big deal - it 
was a new object.


In WebIDL, IIRC, this is considered bad style. You are supposed to use a 
method, not an IDL attribute (getter), when returning a fresh object 
each time. Anne can vouch, cc'ing him.


I'm not saying we must match WebIDL good style -- doing so would mean 
binary data does not support interior extractions via struct.foo.bar -- 
rather something awful such as struct.foo().bar, which won't fly. But it 
seems worth considering that the odd or bad style design decision is 
upstream of our petty difference over expandos.


But lost expandos due to loss of identity are an especially nasty 
kind of bug to find.


I'm actually curious - are you aware of such bugs, and what do they 
actually look like?


Search for "disappearing expando" on the web.

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Niko Matsakis
On Wed, Sep 04, 2013 at 02:41:24PM -0700, Brendan Eich wrote:
 Both add up to two, not one, so the counterargument is "odd beats odder",
 and prohibiting expandos keeps the oddness down to == and
 nothing else.

I just want to agree with this. I think the behavior of `==` is
unfortunate, but much more of a corner case than expando properties
disappearing and reappearing willy nilly as users switch between
otherwise identical views on the same data.

I feel like `==` can be explained by the fact that these are distinct
objects pointing at the same underlying buffer. People get aliasing.
But the fact that expando properties would live *on the wrapper* and
not on the *underlying data* is quite surprising and counterintuitive
-- some of the data (the elements) are aliased, but others are not.


Niko
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-04 Thread Filip Pizlo

On Sep 4, 2013, at 5:25 PM, Brendan Eich bren...@mozilla.com wrote:

 Filip Pizlo wrote:
 Typed arrays have both of these properties right now and so expandability 
 is a free lunch.
 
 The last sentence makes a for-all assertion I don't think implementations 
 must be constrained by. 
 
 How so? It is true that some VM implementations will be better than others. 
 But ultimately every VM can implement every optimization that every other VM 
 has; in fact my impression is that this is exactly what is happening as we 
 speak.
 
 My for-all referred to all typed arrays across all VMs, not just all VMs.
 
 Also just as a point of fact (something done, the Latin root means deed), 
 I do not see the same optimizations being used in all VMs. For example, 
 SpiderMonkey's TI (written up here: http://rfrn.org/~shu/drafts/ti.pdf 
 [PLDI 2012]) is not being used elsewhere AFAIK -- please correct me if I'm 
 mistaken.

Interesting point.  Equivalent optimizations are being done.  Other VMs also 
infer types one way or another.  And I'd argue that my way of inferring types 
is the best - it incurs smaller overheads for start-up while achieving more 
precise results.  (Of course I must say that - I stand by my stuff, heh.)  That 
being said, I do think that FF's TI is really cool and loved reading that paper.

It's kind of like in JVMs, all of the big-league ones did speculative inlining 
- but they do it in radically different ways and rely on different kinds of 
feedback and if you go to a conference where JVM hackers show up, they will 
argue about which is best.  I have fond memories of Sun vs. IBM vs. Oracle 
shouting matches about how you do deoptimization, whether you do deoptimization 
at all, and what you need to analyze and prove things about the class 
hierarchy.  That doesn't change the basics: they all do speculative inlining 
and it performs sort of the same in the end.

I suspect that the same thing is becoming true of typed arrays, regardless of 
whether they are extensible or not.  I guess that when I said "every 
optimization that every other VM has" I didn't mean literally using the same 
exact algorithm - just performing optimizations that achieve equivalent results.

 
 So, it doesn't make much sense to make language design decisions because it 
 might make some implementor's life easier right now. If you could argue that 
 something will /never/ be efficient if we add feature X, then that might be 
 an interesting argument. But as soon as we identify one sensible 
 optimization strategy for making something free, I would tend to think that 
 this is sufficient to conclude that the feature is free and there is no need 
 to constrain it. If we don't do this then we risk adding cargo-cult 
 performance features that rapidly become obsolete.
 
 I agree that's a risk. I'm also with Niko in wanting to argue about what the 
 semantics should be without appealing to performance arguments.

Right!  I guess my first order argument is that performance *isn't* an argument 
in favor of non-expandability.

 
 However, I still think you are verging on promising a free lunch. All methods 
 in C++ cannot affordably be virtual. Expandos in JS cost. At fine enough 
 grain, even pretty-well-predicted branches cost. Nothing is free-enough to 
 discount forever in my bitter and long experience :-P.

I am promising a free lunch!  Virtual methods in C++ are only expensive because 
C++ still doesn't have feedback-driven optimization.  JVMs make them free in 
Java.  And they are free.  Period.  There is no upside to marking a method 
final in Java.  I am arguing that expandos are similar.

 
 The lack of static types in JS does not mean exactly one implementation 
 representation must serve for all instances of a given JS-level 
 abstraction. We already have strings optimized variously in the top VMs, 
 including Chords or Ropes, dependent strings, different character sets, etc.
 
 Still find this discussion amusing? Here's the long story: It is these 
 things that I list above that lead to a 16 byte overhead on 32-bit, and a 
 32-byte overhead on 64-bit in the best sane case. Giving typed array 
 objects expandability doesn't add to this overhead, because two of the 
 fields necessary to implement the above (the type, and the buffer) can be 
 displaced for pointing to property storage. Any imaginable attempt to 
 reduce the overhead incurred by the information - using BBOP (big bag of 
 pages) for the type, using an out-of-line weak map for the buffer or the 
 type, encoding some of the bits inside the pointer to the typed array, 
 etc. - can be also used to eradicate any space overhead you'd need for 
 custom properties, so long as you're on board with the free if unused, 
 sub-optimal if you use them philosophy.
 
 For something like decimal, it matters whether there's an empty side table 
 and large-N decimal instances of total size N*S, vs. N*(S+K) for some 
 constant K we could eliminate by specializing harder

Re: Non-extensibility of Typed Arrays

2013-09-04 Thread K. Gadd
Did anyone address what should be done in the use case where it's necessary
for information to 'tag along' with an array or typed array, for interop
purposes? The existence of interior binary data objects seems to complicate
this further; for example I had said that it seems like WeakMap allows
attaching information to a typed array in that case even if it isn't
extensible. If interior objects lose identity, though, it now becomes
*literally impossible* for data to follow an instance of Uint32Array (or
whatever) around the runtime, which is kind of troubling. Obviously I
understand *why* this is the case for interior objects.

Is the meaning of an assignment to an interior object well specified? The
data is copied from the source typed array into the interior object, I
assume.

I'm going to describe how I understand things and from that how it seems
like they could work:
At present when you construct a typed array it is a view over a particular
buffer. You can construct an array with a size `new Uint32Array(32)` in
which case a buffer is allocated for you behind the scenes; you can
construct an array from a buffer + offset/size pair in order to create a
view over a subregion of the buffer. In both cases, the 'array' does not
actually represent or contain the data, it is merely a proxy of sorts
through which you can access elements of a particular type.
It is my understanding that this is the same for binary data types: you can
construct a heap instance of one, in which case it has an invisible backing
buffer, or you can 'construct' one from an existing buffer+offset, in which
case it is more like a proxy that represents the given data type at that
given offset in the buffer, and when you manipulate the proxy you are
manipulating the content of the buffer.

In both cases, I believe it is consistent that these objects are all
'views' or 'proxies', not actual data. The fact that you can create an
instance directly creates the *illusion* of them being actual data but in
every case it is possible for multiple instances to share the same backing
store without sharing referential identity (via ===).

In both cases, I don't believe a user should expect that attaching an
expando to one object instance should modify the expandos on another object
instance. Given this, it seems perfectly reasonable to be able to attach
expandos to a typed array, and I've previously described why this use case
is relevant (interop between compilers targeting JS, and native
hand-written JS, for one).
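
A runnable illustration of the view behaviour described above, using only shipping typed-array APIs; the expando line assumes an engine that permits expandos on typed arrays:

var buf = new ArrayBuffer(16);
var a = new Uint32Array(buf);
var b = new Uint32Array(buf);

a[0] = 42;
b[0];             // 42 -- the elements alias through the shared buffer
a === b;          // false -- two distinct view objects
a.tag = "mine";   // expando on the view (where engines allow it)
b.tag;            // undefined -- metadata does not follow the backing store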

In the same sense, if typed arrays must be constructed to act as proxies
for the 'interior' arrays in a binary data type, being able to attach
expandos to them does not cause much harm, other than the fact that the
lifetime of the expando does not match the lifetime of the underlying
binary data. But this is already true for typed arrays, in a sense.

I think the best way to address the confusion of expandos on interior
arrays is simply non-extensibility, as has been discussed. I don't see why
non-extensibility for interior arrays requires crippling the functionality
of typed arrays in general, since JS already seems to have 2-3 exposed
concepts in this field (seal, freeze, preventExtensions) along with query
methods to find out if those concepts apply to a given object (isSealed,
isFrozen, isExtensible)

If interior arrays are not extensible, I should hope that
Object.isExtensible for them returns false. If it were to return true when
they have no expando support that would be incredibly confusing.
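
Under that design, code that wants to attach metadata without caring which kind of array it received could branch on Object.isExtensible and fall back to a side table; a hedged sketch, with hypothetical helper names:

var sideMetadata = new WeakMap();

function tagArray(arr, info) {
  if (Object.isExtensible(arr)) {
    arr.meta = info;               // top-level array: the expando works
  } else {
    sideMetadata.set(arr, info);   // interior view: keep the data off to the side
  }
}

function getTag(arr) {
  return Object.isExtensible(arr) ? arr.meta : sideMetadata.get(arr);
}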

Anyway, given all this I would propose that the optimal solution (in terms
of usability, at least - can't speak for the performance consequences) is
for typed arrays to be extensible by default, as they are Objects that
point to underlying sequences of elements, just like Array. This gives good
symmetry and lets you cleanly substitute a typed array for an Array in more
cases (resizability and mixed types being the big remaining differences).
In cases where extensibility is a trap for the unwary or actively
undesirable, like interior objects, the instance should be made
non-extensible. This allows all end user code to handle cases where it is
passed an interior array or object without reducing the usefulness of typed
arrays.

FWIW I would also argue that a free-standing instance of any Binary Data
type (that you construct with new, not using an existing buffer) should
maybe be extensible by default as well, even if 'interior' instances are
not. However, making binary data types always non-extensible wouldn't
exactly break any compatibility or use cases, since they're a new feature -
but it does mean now we have to add checks for extensibility/typeof in more
cases, which is awful...

(A related area where this is a big problem for me and authors of similar
packages is emulating the java/C# 'getHashCode' pattern, where objects all
have an associated static hash code. Implementing this often requires
attaching the computed hash to the object

Re: Non-extensibility of Typed Arrays

2013-08-31 Thread Brendan Eich

Filip Pizlo wrote:
I think it's better if you pick one use case and get it totally right. 
 You're not going to get the "optimize my JS code with types" use case 
right.  So stick to the binary data mapping use case, and allow 
arbitrary aliasing.


I am on the same page; any quibbling from me about your question 2 has 
an answer no stronger than the machine-type info doesn't hurt 
performance, and could help some engines.


It would hurt developers if they fell under the influence of some bad 
use-structs-for-speed cult, but (especially from what you say about JSC) 
this doesn't sound like a big risk.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Allen Wirfs-Brock
This thread has convinced me that Typed Arrays should be born extensible. 

Actually, my subclass example in the thread started me down that path.  In many 
cases where you might subclass an array you will want to add per instance 
state.  You can expose a getter/setter on the prototype but the state still 
needs to be associated with the individual instances.  Expando properties (or 
even better properties added in the @@create method) are the most natural way 
to represent that state.
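
A sketch of that subclassing case: the @@create hook mentioned above was an ES6-draft mechanism that did not survive into final ES6, but constructor assignment in class syntax expresses the same per-instance state, and it relies on the instance being born extensible:

class LabeledBytes extends Uint8Array {
  constructor(length, label) {
    super(length);
    this.label = label;          // per-instance expando state
  }
}

var lb = new LabeledBytes(16, "scratch");
lb.label;                        // "scratch"
lb instanceof Uint8Array;        // true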

The Firefox implementors will make this change if it represents TC39 consensus.

I'll put this item on the agenda for the next meeting and see if we can agree 
on extensible Typed Arrays.

Allen




On Aug 28, 2013, at 11:01 PM, Filip Pizlo wrote:

 Here's the part that gets me, though: what is the value of disallowing named 
 properties on typed arrays?  Who does this help?
 
 I don't quite buy that this helps users; most of the objects in your program 
 are going to allow custom properties to be added at any point.  That's kind 
 of the whole point of programming in a dynamic language.  So having one type 
 where it's disallowed doesn't help to clarify thinking.
 
 I also don't buy that it makes anything more efficient.  We only incur 
 overhead from named properties if you actually add named properties to a 
 typed array, and in that case we incur roughly the overhead you'd expect 
 (those named properties are a touch slower than named properties on normal 
 objects, and you obviously need to allocate some extra space to store those 
 named properties).
 
 -Filip
 
 
 
 On Aug 28, 2013, at 10:52 PM, Steve Fink sph...@gmail.com wrote:
 
 On 08/27/2013 09:35 AM, Oliver Hunt wrote:
 My complaint is that this appears to be removing functionality that has 
 been present in the majority of shipping TA implementations, assuming from 
 LH's comment that Chakra supports expandos.
 
 Note that even in the engines that support expandos, they will probably
 not survive a structured clone. I just tried in Chrome and they get
 stripped off. This further limits their utility in today's Web.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Allen Wirfs-Brock

On Aug 30, 2013, at 8:53 AM, Mark S. Miller wrote:

 Dave Herman's "And the other consistency dimension is between array types and 
 struct types. Is anyone arguing that structs should also have expandos?" 
 surprised me, and convinced me of the opposite conclusion. Do you think 
 instances of struct types should be extensible?

I think the right way to think about structs is as a record structure with no 
properties; fixed behavior is provided by a wrapper.  Very similar to the ES 
primitives except that structs can be mutable.  The way to associate properties 
with structs is to encapsulate them in an object, preferably via a class 
definition. If we go that route we can reach the point where ES classes have 
fixed-shape internal state defined as-if by a struct.

Typed Arrays are a different beast that already exist in the real world.  I 
don't see any need for consistency between Typed Arrays and struct types. 
Consistency between Typed Arrays and Array is more important.

Allen







 
 
 On Fri, Aug 30, 2013 at 8:48 AM, Allen Wirfs-Brock al...@wirfs-brock.com 
 wrote:
 This thread has convinced me that Typed Arrays should be born extensible. 
 
 Actually, my subclass example in the thread started me down that path.  In 
 many cases where you might subclass an array you will want to add per 
 instance state.  You can expose a getter/setter on the prototype but the 
 state still needs to be associated with the individual instances.  Expando 
 properties (or even better properties added in the @@create method) are the 
 most natural way to represent that state.
 
 The Firefox implementors will make this change if it represents TC39 
 consensus.
 
 I'll put this item on the agenda for the next meeting and see if we can agree 
 on extensible Typed Arrays.
 
 Allen
 
 
 
 
 On Aug 28, 2013, at 11:01 PM, Filip Pizlo wrote:
 
 Here's the part that gets me, though: what is the value of disallowing named 
 properties on typed arrays?  Who does this help?
 
 I don't quite buy that this helps users; most of the objects in your program 
 are going to allow custom properties to be added at any point.  That's kind 
 of the whole point of programming in a dynamic language.  So having one type 
 where it's disallowed doesn't help to clarify thinking.
 
 I also don't buy that it makes anything more efficient.  We only incur 
 overhead from named properties if you actually add named properties to a 
 typed array, and in that case we incur roughly the overhead you'd expect 
 (those named properties are a touch slower than named properties on normal 
 objects, and you obviously need to allocate some extra space to store those 
 named properties).
 
 -Filip
 
 
 
 On Aug 28, 2013, at 10:52 PM, Steve Fink sph...@gmail.com wrote:
 
 On 08/27/2013 09:35 AM, Oliver Hunt wrote:
 My complaint is that this appears to be removing functionality that has 
 been present in the majority of shipping TA implementations, assuming from 
 LH's comment that Chakra supports expandos.
 
 Note that even in the engines that support expandos, they will probably
 not survive a structured clone. I just tried in Chrome and they get
 stripped off. This further limits their utility in today's Web.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 
 
 
 -- 
 Cheers,
 --MarkM

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Mark S. Miller
Dave Herman's "And the other consistency dimension is between array types
and struct types. Is anyone arguing that structs should also have
expandos?" surprised me, and convinced me of the opposite conclusion. Do
you think instances of struct types should be extensible?


On Fri, Aug 30, 2013 at 8:48 AM, Allen Wirfs-Brock al...@wirfs-brock.comwrote:

 This thread has convinced me that Typed Arrays should be born extensible.

 Actually, my subclass example in the thread started me down that path.
  In many cases where you might subclass an array you will want to add
 per instance state.  You can expose a getter/setter on the prototype but
 the state still needs to be associated with the individual instances.
  Expando properties (or even better properties added in the @@create
 method) are the most natural way to represent that state.

 The Firefox implementors will make this change if it represents TC39
 consensus.

 I'll put this item on the agenda for the next meeting and see if we can
 agree on extensible Typed Arrays.

 Allen




 On Aug 28, 2013, at 11:01 PM, Filip Pizlo wrote:

 Here's the part that gets me, though: what is the value of disallowing
 named properties on typed arrays?  Who does this help?

 I don't quite buy that this helps users; most of the objects in your
 program are going to allow custom properties to be added at any point.
  That's kind of the whole point of programming in a dynamic language.  So
 having one type where it's disallowed doesn't help to clarify thinking.

 I also don't buy that it makes anything more efficient.  We only incur
 overhead from named properties if you actually add named properties to a
 typed array, and in that case we incur roughly the overhead you'd expect
 (those named properties are a touch slower than named properties on normal
 objects, and you obviously need to allocate some extra space to store those
 named properties).

 -Filip



 On Aug 28, 2013, at 10:52 PM, Steve Fink sph...@gmail.com wrote:

 On 08/27/2013 09:35 AM, Oliver Hunt wrote:

 My complaint is that this appears to be removing functionality that has
 been present in the majority of shipping TA implementations, assuming from
 LH's comment that Chakra supports expandos.


 Note that even in the engines that support expandos, they will probably
 not survive a structured clone. I just tried in Chrome and they get
 stripped off. This further limits their utility in today's Web.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss



 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




-- 
Cheers,
--MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread David Herman
On Aug 30, 2013, at 9:39 AM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 I think the right way to think about structs is as a record structure with 
 no properties; fixed behavior is provided by a wrapper.  Very similar to the ES 
 primitives except that structs can be mutable.  The way to associate 
 properties with structs is to encapsulate them in an object, preferably via a 
 class definition. If we go that route we can reach the point where ES classes 
 have fixed-shape internal state defined as-if by a struct.

I might give a slightly different angle on this, and describe structs as 
objects with a fixed template for their own properties. They are still objects, 
they still inherit from prototypes. But they have a predefined set of own 
properties.

 Typed Arrays are a different beast that already exist in the real world.  I 
 don't see any need for consistency between Typed Arrays and struct types. 
 Consistency between Typed Arrays and Array is more important.

Mostly agreed, except I'd just refine that to say there's no need for 
consistency *in this dimension*. It would be a shame if typed arrays weren't 
generalized by the typed objects API in general, and I worked hard to make the 
pieces fit together. That nuance aside, the fact that, in practice, arrays are 
patched with additional properties (in fact, IIRC the ES6 template strings API 
adds properties to arrays) suggests that non-extensibility would be a real 
incompatibility between arrays and typed arrays. So I'm cool with making typed 
arrays -- but not structs -- extensible.
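
The template-string precedent is easy to check under final ES6 semantics (which post-date this thread): the tagged-template machinery hands the tag function an array that carries an extra own property, raw.

function tag(strings) {
  return {
    isArray: Array.isArray(strings),                               // true -- it really is an array
    hasRaw: Object.prototype.hasOwnProperty.call(strings, "raw")   // true -- the added property
  };
}

tag`a\nb`;   // { isArray: true, hasRaw: true }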

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Oliver Hunt

On Aug 30, 2013, at 10:13 AM, David Herman dher...@mozilla.com wrote:

 On Aug 30, 2013, at 9:39 AM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 
 I think the right way to think about structs is as a record structure with 
 no properties; fixed behavior is provided by a wrapper.  Very similar to the 
 ES primitives except that structs can be mutable.  The way to associate 
 properties with structs is to encapsulate them in an object, preferably via 
 a class definition. If we go that route we can reach the point where ES 
 classes have fixed-shape internal state defined as-if by a struct.
 
 I might give a slightly different angle on this, and describe structs as 
 objects with a fixed template for their own properties. They are still 
 objects, they still inherit from prototypes. But they have a predefined set 
 of own properties.
 
 Typed Arrays are a different beast that already exist in the real world.  I 
 don't see any need for consistency between Typed Arrays and struct types. 
 Consistency between Typed Arrays and Array is more important.
 
 Mostly agreed, except I'd just refine that to say there's no need for 
 consistency *in this dimension*. It would be a shame if typed arrays weren't 
 generalized by the typed objects API in general, and I worked hard to make 
 the pieces fit together. That nuance aside, the fact that, in practice, 
 arrays are patched with additional properties (in fact, IIRC the ES6 template 
 strings API adds properties to arrays) suggests that non-extensibility would 
 be a real incompatibility between arrays and typed arrays. So I'm cool with 
 making typed arrays -- but not structs -- extensible.

I think of TypedArrays as being Arrays of structs with a fixed type/shape - the 
Array itself is a regular object with regular property characteristics, whereas 
the individual elements are all value types.

For example, say I have a struct type S, and make a regular Array filled with 
S.  Aside from the poor performance, this is now essentially what a typed array 
of structs is.  What is the reason for making the fast version of an array of 
structs lose the features of a regular array filled with structs?

--Oliver

 
 Dave
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Filip Pizlo

On Aug 30, 2013, at 9:28 AM, Brendan Eich bren...@mozilla.com wrote:

 Hi,
 Filip Pizlo mailto:fpi...@apple.com
 August 28, 2013 11:01 PM
 Here's the part that gets me, though: what is the value of disallowing named 
 properties on typed arrays?  Who does this help?
 
 You've heard about symmetry with struct types (ES6), right? Those do not want 
 expandos. We could break symmetry but at some cost. Too small to worry about? 
 Outweighed by benefits?

It's a fair point.  I don't see where it would break semantics but I'll try to 
do a thought experiment to see if it makes things confusing or inconvenient to 
the programmer.  Whether or not I care depends on the answers to the following 
questions:

1) Is the purpose to simplify programming by allowing you to add static typing?
2) Are we trying to help JITs?
3) Do we just want a sensible way of mapping to binary data?  (For both DOM and 
C-to-JS compilers)

It appears that (1) is a non-goal; if it was a goal then we'd have a different 
aliasing story, we wouldn't have the byteOffset/byteLength/buffer properties, 
and there would be zero discussion of binary layout.  We'd also bake the types 
deeper into the language.  This doesn't simplify programming if you have to 
write code in a bifurcated world with both traditional JS objects (all dynamic, 
objects can point at each other, but the backing stores of objects don't alias 
each other) and binary objects (have some types to describe layout, but can't 
have arbitrary object graphs, and backing stores of distinct objects may alias 
each other).

(2) appears to be a bit more of a pie-in-the-sky dream than a goal.  A decent 
JIT will already recognize idioms where the programmer created an object with a 
clear sequence of fields and then uses that object in a monomorphic way.  Both 
'function Constructor() { this.a = ...; this.b = ...; }' and '{a:..., b:...}' 
will get recognized, through some combination of run-time and compile-time 
analysis, as indicating that the user intends to have a type that has 'a' and 
'b' as fields.  It's true that binary data makes this explicit, but the JIT can 
fall apart in the same way as it does already for normal objects: the 
references to these objects tend to be untyped so the programmer can 
inadvertently introduce polymorphism and lose some (most?) of the benefits.  
Because binary data objects will have potentially aliased backing stores, you 
get the additional problem that you can't do any field-based aliasing analysis: 
for a normal JS object if I know that 'o.a' accesses own-property 'a' and it's 
not a getter/setter; and 'o.b' accesses own-property 'b' and it's not a 
getter/setter - then I know that these two accesses don't alias.  For binary 
data, I don't quite have such a guarantee: 'a' can overlap 'b' in some other 
object.  Also, the fact that a struct type instance might have to know about a 
buffer along with an offset into that buffer introduces a greater object size 
overhead than plain JS objects.  A plain JS object needs roughly two pieces of 
overhead: something to identify the type and a pointer reserved for when you 
store more things into it.  A struct type instance will need roughly three 
pieces of overhead: something to identify the type, a pointer to the buffer, 
and some indication of the offset within that buffer.  The only performance win 
from struct types is probably that it gives you an explicit tuple flattening.  
That's kind of cool but I remember that C# had struct types while Java didn't 
and yet JVMs still killed .NET on any meaningful measure of performance.
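For concreteness, a small illustration of the constructor idiom referred to above; nothing here is new API, it is just the plain-object pattern that existing JITs already recognize:

    function Vec2(x, y) {
      this.x = x;   // the engine infers a fixed layout ("shape") for objects built this way
      this.y = y;
    }
    function dot(a, b) {
      return a.x * b.x + a.y * b.y;   // stays monomorphic, and fast, while only Vec2s flow here
    }
    dot(new Vec2(1, 2), new Vec2(3, 4));   // 11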

So it appears that the most realistic goal is (3).  In that case, I can't 
imagine a case where arrays being expandos but struct types being totally 
frozen will make the task of struct mapping to native code any harder.  If 
you're a programmer who doesn't want a typed array to have custom properties, 
then you won't give it custom properties - simple as that.  No need to enforce 
the invariant.

 
 Sfink's point about structured clone is good, except he wrote "structured 
 clone" and then angels cried... tears of blood.
 
 I don't quite buy that this helps users; most of the objects in your program 
 are going to allow custom properties to be added at any point.  That's kind 
 of the whole point of programming in a dynamic language.  So having one type 
 where it's disallowed doesn't help to clarify thinking.
 
 There are other such types a-coming :-).

And I'll be grumpy about some of those, too. ;-)

 
 I also don't buy that it makes anything more efficient.  We only incur 
 overhead from named properties if you actually add named properties to a 
 typed array, and in that case we incur roughly the overhead you'd expect 
 (those named properties are a touch slower than named properties on normal 
 objects, and you obviously need to allocate some extra space to store those 
 named properties).
 
 
Honest q: couldn't you squeeze one more word out if JSC typed arrays 
were non-extensible?

Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Brendan Eich

David Herman wrote:

  Typed Arrays are a different beast that already exist in the real world.  I 
don't see any need for consistency between Typed Arrays and struct types. 
Consistency between Typed Arrays and Array is more important.


Mostly agreed, except I'd just refine that to say there's no need for 
consistency *in this dimension*. It would be a shame if typed arrays weren't 
generalized by the typed objects API in general, and I worked hard to make the 
pieces fit together. That nuance aside,


I think you are too kind :-|.

Allen, the point that typed arrays are different from structs because 
some implementations make the former extensible, while the latter do not 
exist in any implementation yet, is a just-so story, half of which is 
hypothetical! I could just as well argue from Firefox's non-extensible 
precedent if I wanted to.


The better argument is one that accounts for *why* structs are not 
extensible and how typed arrays differ, if they do differ, by design -- 
not based on implementation in some but not all browsers.



  the fact that, in practice, arrays are patched with additional properties (in 
fact, IIRC the ES6 template strings API adds properties to arrays) suggests 
that non-extensibility would be a real incompatibility between arrays and typed 
arrays.


This only goes so far, since one could try to argue from objects to 
structs in the same way. What's the difference-in-kind reason? I can 
give answers but I'm looking for others' answers.



  So I'm cool with making typed arrays -- but not structs -- extensible.


It's ok if we decide this, but let's have a not-just-so story beyond the 
array-like use-case one -- it's good, but without something else it could 
be used to reason from objects to structs, and no one here wants extensible 
structs. (Right?)


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Brendan Eich
Thanks for the reply, I'll let SM and V8 peeps speak for themselves 
(they retired my SM number ;-).

Filip Pizlo mailto:fpi...@apple.com
August 30, 2013 10:41 AM

On Aug 30, 2013, at 9:28 AM, Brendan Eich bren...@mozilla.com 
mailto:bren...@mozilla.com wrote:



Hi,

Filip Pizlo mailto:fpi...@apple.com
August 28, 2013 11:01 PM
Here's the part that gets me, though: what is the value of 
disallowing named properties on typed arrays?  Who does this help?


You've heard about symmetry with struct types (ES6), right? Those do 
not want expandos. We could break symmetry but at some cost. Too 
small to worry about? Outweighed by benefits?


It's a fair point.  I don't see where it would break semantics but 
I'll try to do a thought experiment to see if it makes things 
confusing or inconvenient to the programmer.  Whether or not I care 
depends on the answers to the following questions:


1) Is the purpose to simplify programming by allowing you to add 
static typing?


No, we put a stake through that cold heart.


2) Are we trying to help JITs?


Yes, I think so (SM retirement makes this easy for me to say ;-). Even 
excluding type inference as done in SpiderMonkey, just using PICs, 
structs over against objects can help JITs avoid boxing values, same as 
typed arrays do compared to Arrays.


Sometimes you want a product of different types, not a vector of 
same-typed elements. Typed arrays were designed so you would alias two 
views, crazypants. Structs put on sanepants. Just making sure the 
use-case has clear motivation here.


If so, then the JIT wins implemented today among multiple engines for 
typed array element loads and stores will almost certainly be wanted for 
struct field traffic too.


3) Do we just want a sensible way of mapping to binary data?  (For 
both DOM and C-to-JS compilers)


Yes, and don't forget the GPU as well (DOM doesn't take that in).

/be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Filip Pizlo

On Aug 30, 2013, at 12:31 PM, Brendan Eich bren...@mozilla.com wrote:

 Thanks for the reply, I'll let SM and V8 peeps speak for themselves (they 
 retired my SM number ;-).
 Filip Pizlo mailto:fpi...@apple.com
 August 30, 2013 10:41 AM
 
 On Aug 30, 2013, at 9:28 AM, Brendan Eich bren...@mozilla.com 
 mailto:bren...@mozilla.com wrote:
 
 Hi,
 Filip Pizlo mailto:fpi...@apple.com
 August 28, 2013 11:01 PM
 Here's the part that gets me, though: what is the value of disallowing 
 named properties on typed arrays?  Who does this help?
 
 You've heard about symmetry with struct types (ES6), right? Those do not 
 want expandos. We could break symmetry but at some cost. Too small to worry 
 about? Outweighed by benefits?
 
 It's a fair point.  I don't see where it would break semantics but I'll try 
 to do a thought experiment to see if it makes things confusing or 
 inconvenient to the programmer.  Whether or not I care depends on the 
 answers to the following questions:
 
 1) Is the purpose to simplify programming by allowing you to add static 
 typing?
 
 No, we put a stake through that cold heart.
 
 2) Are we trying to help JITs?
 
 Yes, I think so (SM retirement makes this easy for me to say ;-). Even 
 excluding type inference as done in SpiderMonkey, just using PICs, structs 
 over against objects can help JITs avoid boxing values, same as typed arrays 
 do compared to Arrays.

This isn't really a win, at least not for us, anyway.  We don't box values in 
the sense of allocating stuff in the heap; we only tag them.  The tagging 
operations are just too darn cheap to worry about getting rid of them.  For 
example, int untagging is basically free.  Double untagging is not quite free 
but our double array inference (for normal JS arrays) is too darn good - you'd 
have to try quite hard to find a case where using a Float64Array gives you a 
real win over a JS array into which you only stored doubles.  One exception is 
that our double array inference for normal JS arrays fails if you store NaN.  
Our current philosophy towards that is "oh well" - it's not clear that this 
arises enough that we should care.
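A small illustration of the comparison being made (the NaN caveat reflects the engine heuristic described above, not anything in the spec):

    var plain = [];
    for (var i = 0; i < 1000; i++) plain[i] = i * 0.5;   // inferred as a "double array" by the engine
    var typed = new Float64Array(1000);
    for (var i = 0; i < 1000; i++) typed[i] = i * 0.5;

    plain[0] = NaN;   // can defeat the double-array inference described above
    typed[0] = NaN;   // the Float64Array is unaffected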

 
 Sometimes you want a product of different types, not a vector of same-typed 
 elements. Typed arrays were designed so you would alias two views, 
 crazypants. Structs put on sanepants. Just making sure the use-case has clear 
 motivation here.

OK - by sanepants do you mean that there is no weirdo aliasing?  Going back 
to my example of field 'a' aliasing field 'b' - is it possible?

 
 If so, then the JIT wins implemented today among multiple engines for typed 
 array element loads and stores will almost certainly be wanted for struct 
 field traffic too.

I think you're being too generous to the typed array optimizations.  Vanilla JS 
arrays are catching up, or have already caught up and surpassed them, depending on 
how you look at it.

It sure is tempting to add type thingies to help JITs but I think we're quickly 
approaching a world where adding primitive types to JS will be a bit like 
marking your Java methods final in the mistaken belief that it will unlock some 
extra devirtualization, or marking your variables 'register' in C thinking that 
this will make your code super fast.

 
 3) Do we just want a sensible way of mapping to binary data?  (For both DOM 
 and C-to-JS compilers)
 
 Yes, and don't forget the GPU as well (DOM doesn't take that in).

Right!  I totally buy the native mapping story for struct types.  I just don't 
buy the performance story. ;-)

-Filip

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Brendan Eich

Filip Pizlo wrote:
Sometimes you want a product of different types, not a vector of 
same-typed elements. Typed arrays were designed so you would alias 
two views, crazypants. Structs put on sanepants. Just making sure the 
use-case has clear motivation here.


OK - by sanepants do you mean that there is no weirdo aliasing? 
 Going back to my example of field 'a' aliasing field 'b' - is it 
possible?


Summoning dherman here, but yes: sanepants in my book means no aliasing 
-- not just no aliasing required, no aliasing possible.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Brendan Eich

Hi,

Filip Pizlo mailto:fpi...@apple.com
August 28, 2013 11:01 PM
Here's the part that gets me, though: what is the value of disallowing 
named properties on typed arrays?  Who does this help?


You've heard about symmetry with struct types (ES6), right? Those do not 
want expandos. We could break symmetry but at some cost. Too small to 
worry about? Outweighed by benefits?


Sfink's point about structured clone is good, except he wrote 
"structured clone" and then angels cried... tears of blood.


I don't quite buy that this helps users; most of the objects in your 
program are going to allow custom properties to be added at any point. 
 That's kind of the whole point of programming in a dynamic language. 
 So having one type where it's disallowed doesn't help to clarify 
thinking.


There are other such types a-coming :-).


I also don't buy that it makes anything more efficient.  We only incur 
overhead from named properties if you actually add named properties to 
a typed array, and in that case we incur roughly the overhead you'd 
expect (those named properties are a touch slower than named 
properties on normal objects, and you obviously need to allocate some 
extra space to store those named properties).




Honest q: couldn't you squeeze one more word out if JSC typed arrays 
were non-extensible?


/be


-Filip





___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
Oliver Hunt mailto:oli...@apple.com
August 27, 2013 9:35 AM
Existing types with magic index properties (other than Array) just 
drop numeric expandos on the floor so it's logically a no-op. Unless 
there was a numeric accessor on the prototype (which non-extensibility 
does not save you from).


My complaint is that this appears to be removing functionality that 
has been present in the majority of shipping TA implementations, 
assuming from LH's comment that Chakra supports expandos.


--Oliver


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

Domenic Denicola mailto:dome...@domenicdenicola.com
August 27, 2013 9:26 AM
I am not aware of all the nuances of the discussion, but as a 
developer I would find the behavior for numeric expandos confusing. 
For a typed array of length 1024, setting `ta[1023]` would do 
something completely different from setting `ta[1024]`. Unlike normal 
arrays, setting `ta[1024]` would not change `ta.length`, and 
presumably `ta[1024]` would not be exposed by the various iteration 
facilities.


I would much rather receive a loud error (in strict mode), which will 
either alert me to my code being weird, or possibly to my code 
committing an off-by-one error.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

Oliver Hunt mailto:oli...@apple.com
August 27, 2013 9:18 AM
The current argument for non-extensibility seems to be that mozilla doesn't 
support them.  It sounds like all other engines do.


There are plenty of reasons developers may want expandos - they're 
generally useful for holding different kinds of metadata.  By 
requiring a separate object to hold that information we're merely 
making a developer's life harder.  This is also inconsistent with all 
other magically-indexable types in ES and the DOM.


I'm also not sure what the performance gains of inextensibility are, 
if DH could expand on that it would be greatly appreciated.


--Oliver



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
Allen Wirfs-Brock mailto:al...@wirfs-brock.com
August 27, 2013 9:04 AM
see meeting notes 
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-07/july-24.md#54-are-typedarray-insances-born-non-extensible 





___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread David Herman
On Aug 30, 2013, at 12:46 PM, Filip Pizlo fpi...@apple.com wrote:

 OK - by sanepants do you mean that there is no weirdo aliasing?  Going back 
 to my example of field 'a' aliasing field 'b' - is it possible?

There is plenty of aliasing possible, but I'm trying to understand what you 
mean specifically by weirdo aliasing. Do you mean that in a given struct it's 
impossible for it to have two fields that alias each other? That's definitely 
true. E.g., if I have a struct type

var T = new StructType({ a: t1, b: t2, ... });

then for any given instance x of T, I know for sure that x.a and x.b do not 
alias the same storage.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread David Herman
On Aug 30, 2013, at 3:46 PM, David Herman dher...@mozilla.com wrote:

 E.g., if I have a struct type
 
var T = new StructType({ a: t1, b: t2, ... });
 
 then for any given instance x of T, I know for sure that x.a and x.b do not 
 alias the same storage.

(Except, of course, if t1 and t2 are pointer types like Object.)

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Filip Pizlo

On Aug 30, 2013, at 3:46 PM, David Herman dher...@mozilla.com wrote:

 On Aug 30, 2013, at 12:46 PM, Filip Pizlo fpi...@apple.com wrote:
 
 OK - by sanepants do you mean that there is no weirdo aliasing?  Going 
 back to my example of field 'a' aliasing field 'b' - is it possible?
 
 There is plenty of aliasing possible, but I'm trying to understand what you 
 mean specifically by weirdo aliasing. Do you mean that in a given struct 
 it's impossible for it to have two fields that alias each other? That's 
 definitely true. E.g., if I have a struct type
 
var T = new StructType({ a: t1, b: t2, ... });
 
 then for any given instance x of T, I know for sure that x.a and x.b do not 
 alias the same storage.

Yup, that's what I was concerned about.  And reading over the spec I agree.  
But just for sanity, we're guaranteeing this because you cannot create a struct 
type instance by pointing into an arbitrary offset of a buffer - you can only 
instantiate new ones, or alias structs nested as fields in other structs.  
Right?

-Filip


 
 Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread David Herman
On Aug 30, 2013, at 3:54 PM, Filip Pizlo fpi...@apple.com wrote:

 Yup, that's what I was concerned about.  And reading over the spec I agree.  
 But just for sanity, we're guaranteeing this because you cannot create a 
 struct type instance by pointing into an arbitrary offset of a buffer - you 
 can only instantiate new ones, or alias structs nested as fields in other 
 structs.  Right?

Hm, I must be missing something obvious, but I don't see why you'd need that 
restriction to guarantee this. A struct type with two different fields 
guarantees they're at different offsets from the base:

var T = new StructType({
a: int32, // offset 0
b: int32  // offset 4
});

so even if I point an instance of T into the middle of a struct, x.a and x.b 
must be at different offsets.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread David Herman
On Aug 30, 2013, at 4:20 PM, Filip Pizlo fpi...@apple.com wrote:

 This is the kind of weirdness that I hope struct types *don't* have, if their 
 alleged purpose is to help people optimize their code.

This is a great point, thanks. On the one hand, I concluded long ago the 
exposure of the buffer seems like something we can't really avoid, since it's 
necessary for WebGL arrays-of-structs, which is a main use case. On the other 
hand, that doesn't necessarily mean we need the ability to overlay a struct 
type into random points in a buffer. We'd have to do this carefully, though: I 
believe we'd have to restrict overlaying to just the legacy typed array 
constructors, not to any new kinds of array types (since they may have structs 
nested inside them), in order to guarantee lack of aliasing. And then we'd want 
to make sure this covered the WebGL use cases.

 Now, I don't object to typed arrays having this behavior - it is what it is, 
 and it's certainly useful for doing graphics type stuff.  It's also 
 indispensable for emscripten.  And I'm OK with struct types also having this 
 behavior; in fact I would *expect them to have such behavior* if they're 
 supposed to help C-to-JS code generators or the like.

Not really for C-to-JS, no. I do want them to be useful for e.g. Java-to-JS 
code generators, but those shouldn't need the casting.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-30 Thread Filip Pizlo

On Aug 30, 2013, at 4:42 PM, David Herman dher...@mozilla.com wrote:

 On Aug 30, 2013, at 4:20 PM, Filip Pizlo fpi...@apple.com wrote:
 
 This is the kind of weirdness that I hope struct types *don't* have, if 
 their alleged purpose is to help people optimize their code.
 
 This is a great point, thanks. On the one hand, I concluded long ago that the 
 exposure of the buffer seems like something we can't really avoid, since it's 
 necessary for WebGL arrays-of-structs, which is a main use case. On the other 
 hand, that doesn't necessarily mean we need the ability to overlay a struct 
 type into random points in a buffer. We'd have to do this carefully, though: 
 I believe we'd have to restrict overlaying to just the legacy typed array 
 constructors, not to any new kinds of array types (since they may have 
 structs nested inside them), in order to guarantee lack of aliasing. And then 
 we'd want to make sure this covered the WebGL use cases.

I think it's better if you pick one use case and get it totally right.  You're 
not going to get the "optimize my JS code with types" use case right.  So stick 
to the binary data mapping use case, and allow arbitrary aliasing.

-F


 
 Now, I don't object to typed arrays having this behavior - it is what it is, 
 and it's certainly useful for doing graphics type stuff.  It's also 
 indispensable for emscripten.  And I'm OK with struct types also having this 
 behavior; in fact I would *expect them to have such behavior* if they're 
 supposed to help C-to-JS code generators or the like.
 
 Not really for C-to-JS, no. I do want them to be useful for e.g. Java-to-JS 
 code generators, but those shouldn't need the casting.
 
 Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-29 Thread Filip Pizlo
Here's the part that gets me, though: what is the value of disallowing named 
properties on typed arrays?  Who does this help?

I don't quite buy that this helps users; most of the objects in your program 
are going to allow custom properties to be added at any point.  That's kind of 
the whole point of programming in a dynamic language.  So having one type where 
it's disallowed doesn't help to clarify thinking.

I also don't buy that it makes anything more efficient.  We only incur overhead 
from named properties if you actually add named properties to a typed array, 
and in that case we incur roughly the overhead you'd expect (those named 
properties are a touch slower than named properties on normal objects, and you 
obviously need to allocate some extra space to store those named properties).

-Filip



On Aug 28, 2013, at 10:52 PM, Steve Fink sph...@gmail.com wrote:

 On 08/27/2013 09:35 AM, Oliver Hunt wrote:
 My complaint is that this appears to be removing functionality that has been 
 present in the majority of shipping TA implementations, assuming from LH's 
 comment that Chakra supports expandos.
 
 Note that even in the engines that support expandos, they will probably
 not survive a structured clone. I just tried in Chrome and they get
 stripped off. This further limits their utility in today's Web.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-28 Thread Steve Fink
On 08/27/2013 09:35 AM, Oliver Hunt wrote:
 My complaint is that this appears to be removing functionality that has been 
 present in the majority of shipping TA implementations, assuming from LH's 
 comment that Chakra supports expandos.

Note that even in the engines that support expandos, they will probably
not survive a structured clone. I just tried in Chrome and they get
stripped off. This further limits their utility in today's Web.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Allen Wirfs-Brock
see meeting notes 
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-07/july-24.md#54-are-typedarray-insances-born-non-extensible
 

On Aug 26, 2013, at 5:34 PM, Oliver Hunt wrote:

 So I noticed in the last revision to the TA spec that a decision was made to 
 prevent extensibility of them.  Can someone say why that decision was made? 
 It makes TAs somewhat unique vs. all other builtin types and doesn't match 
 the behavior of Blink or WebKit implementations.
 
 While I am usually in favour of conservative behaviour I'd like more 
 information on the reasoning behind this choice.
 
 --Oliver
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Oliver Hunt
The current argument for non-extensibility seems to be that mozilla doesn't support 
them.  It sounds like all other engines do.

There are plenty of reasons developers may want expandos - they're generally 
useful for holding different kinds of metadata.  By requiring a separate object 
to hold that information we're merely making a developer's life harder.  This 
is also inconsistent with all other magically-indexable types in ES and the DOM.

I'm also not sure what the performance gains of inextensibility are; if DH 
could expand on that it would be greatly appreciated.

--Oliver
 

On Aug 27, 2013, at 9:04 AM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 see meeting notes 
 https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-07/july-24.md#54-are-typedarray-insances-born-non-extensible
  
 
 On Aug 26, 2013, at 5:34 PM, Oliver Hunt wrote:
 
 So I noticed in the last revision to the TA spec that a decision was made to 
 prevent extensibility of them.  Can someone say why that decision was made? 
 It makes TAs somewhat unique vs. all other builtin types and doesn't match 
 the behavior of Blink or WebKit implementations.
 
 While I am usually in favour of conservative behaviour I'd like more 
 information on the reasoning behind this choice.
 
 --Oliver
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


RE: Non-extensibility of Typed Arrays

2013-08-27 Thread Domenic Denicola
I am not aware of all the nuances of the discussion, but as a developer I would 
find the behavior for numeric expandos confusing. For a typed array of length 
1024, setting `ta[1023]` would do something completely different from setting 
`ta[1024]`. Unlike normal arrays, setting `ta[1024]` would not change 
`ta.length`, and presumably `ta[1024]` would not be exposed by the various 
iteration facilities.
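For concreteness, a sketch of the two writes being contrasted, under the hypothetical behavior being argued against here (numeric expandos allowed); Allen's reply further down clarifies that the ES6 draft instead drops or rejects the out-of-range write regardless of extensibility:

    var ta = new Uint8Array(1024);
    ta[1023] = 1;   // ordinary element write
    ta[1024] = 1;   // hypothetically: a plain expando property named "1024"
    ta.length;      // still 1024, and iteration would never visit the expando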

I would much rather receive a loud error (in strict mode), which will either 
alert me to my code being weird, or possibly to my code committing an 
off-by-one error.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Oliver Hunt
Existing types with magic index properties (other than Array) just drop numeric 
expandos on the floor so it's logically a no-op.  Unless there was a numeric 
accessor on the prototype (which non-extensibility does not save you from).

My complaint is that this appears to be removing functionality that has been 
present in the majority of shipping TA implementations, assuming from LH's 
comment that Chakra supports expandos.

--Oliver

On Aug 27, 2013, at 9:26 AM, Domenic Denicola dome...@domenicdenicola.com 
wrote:

 I am not aware of all the nuances of the discussion, but as a developer I 
 would find the behavior for numeric expandos confusing. For a typed array of 
 length 1024, setting `ta[1023]` would do something completely different from 
 setting `ta[1024]`. Unlike normal arrays, setting `ta[1024]` would not change 
 `ta.length`, and presumably `ta[1024]` would not be exposed by the various 
 iteration facilities.
 
 I would much rather receive a loud error (in strict mode), which will either 
 alert me to my code being weird, or possibly to my code committing an 
 off-by-one error.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Brendan Eich
On Aug 27, 2013, at 9:35 AM, Oliver Hunt oli...@apple.com wrote:

 Existing types with magic index properties (other than Array) just drop 
 numeric expandos on the floor so it's logically a no-op.  Unless there was a 
 numeric accessor on the prototype (which non-extensibility does not save you 
 from).

Those are a problem and an anti-use-case.


 My complaint is that this appears to be removing functionality that has been 
 present in the majority of shipping TA implementations, assuming from LH's 
 comment that Chakra supports expandos

Does anyone care, though?

TA instances having no indexed expandos but allowing named ones is weird. 
Better to be consistent to users and help implementations optimize further.

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Allen Wirfs-Brock

On Aug 27, 2013, at 9:26 AM, Domenic Denicola wrote:

 I am not aware of all the nuances of the discussion, but as a developer I 
 would find the behavior for numeric expandos confusing. For a typed array of 
 length 1024, setting `ta[1023]` would do something completely different from 
 setting `ta[1024]`. Unlike normal arrays, setting `ta[1024]` would not change 
 `ta.length`, and presumably `ta[1024]` would not be exposed by the various 
 iteration facilities.
 
 I would much rather receive a loud error (in strict mode), which will either 
 alert me to my code being weird, or possibly to my code committing an 
 off-by-one error.

Integer numeric expandos on TypedArrays (eg, outside the range 0..length-1) are 
disallowed by the ES6 spec. in a manner that is independent of 
the [[Extensible]] internal property.  The discussion at the meeting was about 
non-numeric expandos such as 'foo'.

Allen


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Filip Pizlo


 On Aug 27, 2013, at 9:39 AM, Brendan Eich bren...@mozilla.com wrote:
 
 On Aug 27, 2013, at 9:35 AM, Oliver Hunt oli...@apple.com wrote:
 
 Existing types with magic index properties (other than Array) just drop 
 numeric expandos on the floor so it's logically a no-op.  Unless there was a 
 numeric accessor on the prototype (which non-extensibility does not save you 
 from).
 
 Those are a problem and an anti-use-case.

But they won't change anytime soon, will they?

So being inconsistent is weird. 

 
 
 My complaint is that this appears to be removing functionality that has been 
 present in the majority of shipping TA implementations, assuming from LH's 
 comment that Chakra supports expandos
 
 Does anyone care, though?

I do. Placing named properties on arrays makes sense. Consider a matrix 
implemented as a Float32Array, with named properties telling you the numRows 
and numCols. Just one example. 
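For concreteness, a minimal sketch of the pattern described here; it presumes typed arrays that accept named expandos, which is exactly the behavior under discussion:

    function makeMatrix(numRows, numCols) {
      var m = new Float32Array(numRows * numCols);
      m.numRows = numRows;   // named expandos -- the indexed elements are untouched
      m.numCols = numCols;
      return m;
    }
    var m = makeMatrix(3, 4);
    m[2 * m.numCols + 1] = 7;   // element at row 2, column 1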

 
 TA instances having no indexed expandos but allowing named ones is weird. 
 Better to be consistent to users

Consistency would imply doing what other indexed types do. 

 and help implementations optimize further.

I'm not convinced by this. We support named properties and our typed arrays are 
pretty well optimized in space (three words of overhead) and time (everything 
gets inlined including allocation). If there is some amazing optimization that 
non-expansion gives you, and it's so important that the spec needs to account 
for it, then I'd love to hear what that is. 

 
 /be
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


RE: Non-extensibility of Typed Arrays

2013-08-27 Thread Domenic Denicola
 Integer numeric expandos on TypedArrays (eg, outside the range 0..length-1) 
 are disallowed by the ES6 spec. in a manner that is independent of 
 the [[Extensible]] internal property.  The discussion at the meeting was about 
 non-numeric expandos such as 'foo'.

Oh. That's just weird O_o.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Allen Wirfs-Brock

On Aug 27, 2013, at 9:43 AM, Allen Wirfs-Brock wrote:

 
 On Aug 27, 2013, at 9:26 AM, Domenic Denicola wrote:
 
 I am not aware of all the nuances of the discussion, but as a developer I 
 would find the behavior for numeric expandos confusing. For a typed array of 
 length 1024, setting `ta[1023]` would do something completely different from 
 setting `ta[1024]`. Unlike normal arrays, setting `ta[1024]` would not 
 change `ta.length`, and presumably `ta[1024]` would not be exposed by the 
 various iteration facilities.
 
 I would much rather receive a loud error (in strict mode), which will 
 either alert me to my code being weird, or possibly to my code committing an 
 off-by-one error.
 
 Integer numeric expandos on TypedArrays (eg, outside the range 0..length-1) 
 are disallowed by the ES6 spec. in a manner that is independent of 
 the [[Extensible]] internal property.  The discussion at the meeting was about 
 non-numeric expandos such as 'foo'.

To clarify: out-of-range [[Get]] returns undefined, and out-of-range [[Put]] is 
either a no-op or a throw depending upon the strictness of the code doing the 
[[Put]] (i.e., normal [[Put]] strict behavior).





 
 Allen
 
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Mark S. Miller
On Tue, Aug 27, 2013 at 9:35 AM, Oliver Hunt oli...@apple.com wrote:

 Existing types with magic index properties (other than Array) just drop
 numeric expandos on the floor so it's logically a no-op.


Dropping assignments silently is a bug, as it allows code to innocently
proceed on control flow paths that assume success. That's why strict-mode
turned failed assignments into thrown errors.
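A small example of the general strict-mode rule being invoked, using an ordinary non-extensible object rather than a typed array:

    'use strict';
    var o = Object.preventExtensions({});
    try {
      o.foo = 1;                // would be silently dropped in sloppy mode...
    } catch (e) {
      e instanceof TypeError;   // ...but strict mode turns the failed assignment into a throw
    }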



  Unless there was a numeric accessor on the prototype (which
 non-extensibility does not save you from).

 My complaint is that this appears to be removing functionality that has
 been present in the majority of shipping TA implementations, assuming from
 LH's comment that Chakra supports expandos.


"Majority" is not a relevant constraint. We should try to make the best
decision we can that is compatible with the cross browser web. If all the
major browsers already agreed on one behavior, then we should only consider
deviating from it with great caution. But so long as the major browsers
differ, we need only feel constrained by compatibility with their
intersection. This principle even overrides compatibility with previous
versions of our own spec, as just discussed re [[Invoke]].





 --Oliver

 On Aug 27, 2013, at 9:26 AM, Domenic Denicola dome...@domenicdenicola.com
 wrote:

  I am not aware of all the nuances of the discussion, but as a developer
 I would find the behavior for numeric expandos confusing. For a typed array
 of length 1024, setting `ta[1023]` would do something completely different
 from setting `ta[1024]`. Unlike normal arrays, setting `ta[1024]` would not
 change `ta.length`, and presumably `ta[1024]` would not be exposed by the
 various iteration facilities.
 
  I would much rather receive a loud error (in strict mode), which will
 either alert me to my code being weird, or possibly to my code committing
 an off-by-one error.

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




-- 
Cheers,
--MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread David Herman
On Aug 27, 2013, at 9:47 AM, Filip Pizlo fpi...@apple.com wrote:

 I do. Placing named properties on arrays makes sense. Consider a matrix 
 implemented as a Float32Array, with named properties telling you the numRows 
 and numCols. Just one example. 

There are of course other ways to achieve this that don't involve patching the 
array object, such as building a data abstraction for matrices that has-a 
Float32Array, or creating a new array type with additional methods:

var Matrix = new ArrayType(float32);
Matrix.prototype.numRows = function() { ... }
// or
Object.defineProperty(Matrix.prototype, 'numRows', { get: function() { ... }, ... });

 TA instances having no indexed expandos but allowing named ones is weird. 
 Better to be consistent to users
 
 Consistency would imply doing what other indexed types do. 

Consistency arguments won't get you very far. The indexed properties of typed 
arrays by design act very differently from other indexed types. That's their 
whole reason for existence.

And the other consistency dimension is between array types and struct types. Is 
anyone arguing that structs should also have expandos?

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread K. Gadd
To me the compelling argument against using encapsulation instead of
extensibility is that it breaks compatibility with existing JS code. Once
you encapsulate an array, the encapsulated object no longer acts like an
array and you can't use it in contexts where a normal array is expected.
The ability to do python style 'quacks like an array' duck typing simply
doesn't exist for arrays in JS.

This is a huge problem for JSIL interop - I can't preserve type information
for arrays, or expose other array features, without either breaking interop
with pure JS or otherwise eating some enormous perf hit (proxies,
spidermonkey's deopt from named slots on arrays, etc). Baking this
limitation into the spec for typed arrays is kind of awful, but I can
understand if it's absolutely necessary...

Maybe WeakMap is the right solution for this? I can't remember what the
performance consequences are for that use case. (Can you use an Array as a
weakmap key? I forget, since it's an object-like type but it has special
properties...)
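For concreteness, a minimal sketch of the WeakMap alternative being floated (assuming a WeakMap implementation is available at all, which is the availability caveat mentioned a few lines down):

    var metadata = new WeakMap();
    var ta = new Float32Array(12);
    metadata.set(ta, { numRows: 3, numCols: 4 });   // associate data without touching the array
    metadata.get(ta).numRows;                       // 3, anywhere that holds the same array
    // The entry is collected automatically once the array itself is unreachable.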

Note that I'm not arguing for array subclassing here, just the ability to
'bless' an array instance with extra information. Such use cases are no
doubt fairly rare, even if it's possible to come up with a handful of them.

I assume StructType and ArrayType will address a lot of this, but I'm not
sure how I feel about having to wait for those features when (were typed
arrays specced to allow named expandos) you could do this stuff in a mostly
cross-browser way and ship it right now. (WeakMap fails this test since
IIRC it's still only available in Firefox. :/ I love it and wish I could
use it in the wild!)


On Tue, Aug 27, 2013 at 3:49 PM, David Herman dher...@mozilla.com wrote:

 On Aug 27, 2013, at 9:47 AM, Filip Pizlo fpi...@apple.com wrote:

  I do. Placing named properties on arrays makes sense. Consider a matrix
 implemented as a Float32Array, with named properties telling you the
 numRows and numCols. Just one example.

 There are of course other ways to achieve this that don't involve patching
 the array object, such as building a data abstraction for matrices that
 has-a Float32Array, or creating a new array type with additional methods:

 var Matrix = new ArrayType(float32);
 Matrix.prototype.numRows = function() { ... }
 // or
 Object.defineProperty(Matrix.prototype, 'numRows', { get: function() { ... },
 ... });

  TA instances having no indexed expandos but allowing named ones is
 weird. Better to be consistent to users
 
  Consistency would imply doing what other indexed types do.

 Consistency arguments won't get you very far. The indexed properties of
 typed arrays by design act very differently from other indexed types.
 That's their whole reason for existence.

 And the other consistency dimension is between array types and struct
 types. Is anyone arguing that structs should also have expandos?

 Dave

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Allen Wirfs-Brock

On Aug 27, 2013, at 3:49 PM, David Herman wrote:

 On Aug 27, 2013, at 9:47 AM, Filip Pizlo fpi...@apple.com wrote:
 
 I do. Placing named properties on arrays makes sense. Consider a matrix 
 implemented as a Float32Array, with named properties telling you the numRows 
 and numCols. Just one example. 
 
 There are of course other ways to achieve this that don't involve patching 
 the array object, such as building a data abstraction for matrices that has-a 
 Float32Array, or creating a new array type with additional methods:
 
var Matrix = new ArrayType(float32);
Matrix.prototype.numRows = function() { ... }
// or
Object.defineProperty(Matrix.prototype, 'numRows', { get: function() { ... }, ... });


or even better:

  class Matrix extends Float32Array {
    get numRows() { ... }
    ...
  }

although Matrix may be a bad example...

Subclasses of Typed Arrays get their own prototype that can add or over-ride 
inherited methods.  The instances of the subclass are still non-extensible 
according to the current spec. draft.

Allen



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Mark S. Miller
On Tue, Aug 27, 2013 at 4:14 PM, K. Gadd k...@luminance.org wrote:

 To me the compelling argument against using encapsulation instead of
 extensibility is that it breaks compatibility with existing JS code. Once
 you encapsulate an array, the encapsulated object no longer acts like an
 array and you can't use it in contexts where a normal array is expected.
 The ability to do python style 'quacks like an array' duck typing simply
 doesn't exist for arrays in JS.

 This is a huge problem for JSIL interop - I can't preserve type
 information for arrays, or expose other array features, without either
 breaking interop with pure JS or otherwise eating some enormous perf hit
 (proxies, spidermonkey's deopt from named slots on arrays, etc). Baking
 this limitation into the spec for typed arrays is kind of awful, but I can
 understand if it's absolutely necessary...

 Maybe WeakMap is the right solution for this? I can't remember what the
 performance consequences are for that use case. (Can you use an Array as a
 weakmap key?


Yes.



 I forget, since it's an object-like type but it has special properties...)


A weakmap key has an unforgeable per-act-of-creation identity, which is the
only requirement. Arrays pass. Strings fail. Interestingly, if we provide a
system-wide interning table from strings to symbols, then internable
symbols fail. Else, unique symbols pass, but have all the problems
previously enumerated.




 Note that I'm not arguing for array subclassing here, just the ability to
 'bless' an array instance with extra information. Such use cases are no
 doubt fairly rare, even if it's possible to come up with a handful of them.

 I assume StructType and ArrayType will address a lot of this, but I'm not
 sure how I feel about having to wait for those features when (were typed
 arrays specced to allow named expandos) you could do this stuff in a mostly
 cross-browser way and ship it right now. (WeakMap fails this test since
 IIRC it's still only available in Firefox. :/ I love it and wish I could
 use it in the wild!)


 On Tue, Aug 27, 2013 at 3:49 PM, David Herman dher...@mozilla.com wrote:

 On Aug 27, 2013, at 9:47 AM, Filip Pizlo fpi...@apple.com wrote:

  I do. Placing named properties on arrays makes sense. Consider a matrix
 implemented as a Float32Array, with named properties telling you the
 numRows and numCols. Just one example.

 There are of course other ways to achieve this that don't involve
 patching the array object, such as building a data abstraction for matrices
 that has-a Float32Array, or creating a new array type with additional
 methods:

 var Matrix = new ArrayType(float32);
 Matrix.prototype.numRows = function() { ... }
 // or
 Object.defineProperty(Matrix.prototype, 'numRows', { get: function() { ... },
 ... });

  TA instances having no indexed expandos but allowing named ones is
 weird. Better to be consistent to users
 
  Consistency would imply doing what other indexed types do.

 Consistency arguments won't get you very far. The indexed properties of
 typed arrays by design act very differently from other indexed types.
 That's their whole reason for existence.

 And the other consistency dimension is between array types and struct
 types. Is anyone arguing that structs should also have expandos?

 Dave

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss



 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




-- 
Cheers,
--MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-08-27 Thread Oliver Hunt

On Aug 27, 2013, at 3:49 PM, David Herman dher...@mozilla.com wrote:

 On Aug 27, 2013, at 9:47 AM, Filip Pizlo fpi...@apple.com wrote:
 
 I do. Placing named properties on arrays makes sense. Consider a matrix 
 implemented as a Float32Array, with named properties telling you the numRows 
 and numCols. Just one example. 
 
 There are of course other ways to achieve this that don't involve patching 
 the array object, such as building a data abstraction for matrices that has-a 
 Float32Array, or creating a new array type with additional methods:
 
var Matrix = new ArrayType(float32);
Matrix.prototype.numRows = function() { ... }
// or
Object.defineProperty(Matrix.prototype, 'numRows', { get: function() { ... }, ... });

So what is the answer for jQuery like libraries that want to be able to add 
metadata?

It's possible (if you want) to preventExtensions() on any type, but you can't 
undo it.

 
 TA instances having no indexed expandos but allowing named ones is weird. 
 Better to be consistent to users
 
 Consistency would imply doing what other indexed types do. 
 
 Consistency arguments won't get you very far. The indexed properties of typed 
 arrays by design act very differently from other indexed types. That's their 
 whole reason for existence.
 
 And the other consistency dimension is between array types and struct types. 
 Is anyone arguing that structs should also have expandos?

No, but I would expect expandos to be possible on an Array of them.  The same 
argument being made in favor of preventExtensions() on TAs applies to all new 
types in ES6 -- why should I be able to add expandos to a Map or any other 
type?  (Map is particularly severe given the overloaded nature of [] in other 
languages and often correctish enough behavior of toString() in ES, e.g. 
m=new Map; m[someInt]=foo; … m[someInt])

--Oliver

 
 Dave
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Non-extensibility of Typed Arrays

2013-08-26 Thread Oliver Hunt
So I noticed in the last revision to the TA spec that a decision was made to 
prevent extensibility of them.  Can someone say why that decision was made? It 
makes TAs somewhat unique vs. all other builtin types and doesn't match the 
behavior of Blink or WebKit implementations.

While I am usually in favour of conservative behaviour I'd like more 
information on the reasoning behind this choice.

--Oliver
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Unifying Binary Data and Typed Arrays

2011-05-18 Thread Luke Hoban
At the March meeting Binary Data was promoted to a Harmony proposal, and it was 
agreed that it would need to be further rationalized with existing Typed Arrays 
in browsers, including at least being interoperable with ArrayBuffer.  With 
more reflection on this, I believe it is important that we embrace a single 
binary array model for JavaScript developers in the browser.  To do this 
effectively while adding the struct and reference functionality of the Binary 
Data proposal, it would be useful to pull the existing Typed Arrays objects 
into ES.next, and add on top of them the necessary functionality to support 
Binary Data.

We talked specifically about embracing ArrayBuffer and aligning the Binary Data 
Array objects with the ArrayBufferView interface.  This allows for basic 
interop between the existing Web APIs and Binary Data objects.  However, code 
is being written today using ArrayBuffer, Int32Array and related objects.  
Binary data allows constructing one's own Int32Array as new ArrayType(int32), 
but this will not replace the developer desire to use Int32Array directly.  To 
unify the two array models, it would make sense to also embrace the specific 
array objects that are currently in browsers.
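Following the spelling used above, a rough sketch of the two ways of writing the same array under the proposed unification; ArrayType and int32 are from the Binary Data proposal and are hypothetical here, with constructor details still in flux at the time:

    var MyInt32Array = new ArrayType(int32);   // Binary Data spelling (hypothetical)
    var a = new MyInt32Array(16);
    var b = new Int32Array(16);                // existing Typed Arrays spelling
    // The goal stated above: both should be ArrayBufferView-compatible views
    // over an ArrayBuffer, so existing Web APIs accept either.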

I've updated the Binary Data Discussion page 
(http://wiki.ecmascript.org/doku.php?id=harmony:binary_data_discussion) with a 
note on the proposed extension which would incorporate the relevant objects 
into the proposal, and I've augmented the Typed Arrays page 
(http://wiki.ecmascript.org/doku.php?id=strawman:typed_arrays) with a writeup 
of the relevant Typed Arrays objects in ECMAScript spec language separated 
from the WebIDL indirection.  As part of the rationalization of 
Binary Data, I'd like to suggest that we embrace a single unified binary array 
story that covers both the existing Typed Arrays functionality and the block 
types, structs and references from Binary Data.

Luke
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Typed Arrays and Endianness

2010-05-24 Thread Kenneth Russell
On Thu, May 20, 2010 at 4:50 PM, Waldemar Horwat walde...@google.com wrote:
 Having unspecified endianness in the specification of TypedArrays seems like
 a show-stopper to me.  There are two alternatives here:

 - If you do any I/O, you want the endianness to be specified.  Sending data
 from one process to another with a different endianness would get you into
 trouble.

 - If you don't do any I/O, then the TypedArray is private to ECMAScript.  In
 this case there are programming techniques which will avoid the bulk of the
 cost of endian switching, so you don't gain much by leaving endianness
 unspecified in a language like ECMAScript.

Let's defer this discussion until after tomorrow's face-to-face
meeting. Use cases such as WebGL and audio APIs require the native
endianness of the platform to be used, even though data is sent from
ECMAScript to another library or process on the same machine. I'd also
like to find out what you mean about programming techniques to avoid
the cost of endianness switching.

-Ken
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Typed Arrays and Endianness

2010-05-20 Thread Waldemar Horwat

Having unspecified endianness in the specification of TypedArrays seems like a 
show-stopper to me.  There are two alternatives here:

- If you do any I/O, you want the endianness to be specified.  Sending data 
from one process to another with a different endianness would get you into 
trouble.

- If you don't do any I/O, then the TypedArray is private to ECMAScript.  In 
this case there are programming techniques which will avoid the bulk of the 
cost of endian switching, so you don't gain much by leaving endianness 
unspecified in a language like ECMAScript.
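For what it's worth, the DataView interface that ended up in the typed array spec (not discussed in this thread) addresses the I/O case by taking an explicit endianness flag on every access; a minimal sketch:

    var buf = new ArrayBuffer(4);
    var view = new DataView(buf);
    view.setUint32(0, 0xDEADBEEF, true);    // explicitly little-endian write
    view.getUint32(0, true);                // 0xDEADBEEF on any platform
    view.getUint32(0, false);               // byte-swapped value if re-read as big-endian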

   Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss