> I believe (and any implementors, please correct me if I'm wrong) that the
> most common approach is to guess the likely maximum size for the array,
> double it, and then allocate that much memory - and thus all the array
> methods can remain optimized until the memory size needs to double again.

They usually don't *guess*; they just start with some fixed capacity and keep
growing it until they have enough. With some arrays, like the result of
`Array.prototype.map`, I think, for at least V8, they allocate the full
backing store up front, since the result length is already known.
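To make the growth strategy concrete, here's a rough sketch of a
grow-on-demand backing store, written in plain JS purely for illustration.
The `GrowableStore` name, the initial capacity of 8, and the doubling factor
are assumptions for the example, not what any particular engine actually does:

```js
// Toy model of a growable backing store. Real engines do this in C++ on raw
// memory; the starting capacity (8) and growth factor (2) are illustrative.
class GrowableStore {
  constructor() {
    this.capacity = 8;                   // start from a fixed size, not a guess
    this.length = 0;
    this.slots = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      // Out of room: double the capacity and copy the existing entries over.
      this.capacity *= 2;
      const next = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) next[i] = this.slots[i];
      this.slots = next;
    }
    this.slots[this.length++] = value;
  }
}
```

Appends stay amortized O(1) under a scheme like this, but the reserved
capacity can sit at up to roughly twice the number of live entries.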
> Is there any evidence that existing arrays *prevent* engines from making
> optimizations?

Not always *prevent*, but they do make it far more complicated in many cases.
For example: https://bugs.chromium.org/p/v8/issues/detail?id=2229

They do block optimization in memory-constrained scenarios, though, because
the fast path requires too much memory. For example, Moddable's XS and
JerryScript won't be able to elide the check without doubling their code size
for those builtins.

Most of the blocked optimizations are about memory size and allocation
overhead. An implementation might choose to allocate the array's entries
inline with the object itself, rather than behind a separate pointer. You can
only really do that with objects of fixed size.

It's worth noting that if you start with room for 8 items and double on every
reallocation, 100 arrays of length 8 would end up allocating room for 800
entries, but 100 arrays of length 9 would end up with room for 1600 entries,
not 900. And that's just the overhead of the entries themselves, which still
have to sit behind a pointer member on the parent object instance. So if
there's a lot of these kinds of arrays, it adds up really fast.
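To sanity-check those numbers, here's a small sketch of the capacity math
under that start-at-8, double-when-full policy (the policy and the
`reservedCapacity` helper are assumptions for illustration, not a description
of any specific engine):

```js
// Capacity actually reserved for an array holding `length` entries when the
// backing store starts at 8 slots and doubles whenever it runs out of room.
function reservedCapacity(length) {
  let capacity = 8;
  while (capacity < length) capacity *= 2;
  return capacity;
}

console.log(100 * reservedCapacity(8)); // 800  - length 8 fits the initial 8 slots
console.log(100 * reservedCapacity(9)); // 1600 - length 9 forces a doubling to 16
```

A fixed-size array sidesteps that slack entirely, since the exact capacity is
known at allocation time.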
-----

Isiah Meadows
[email protected]
www.isiahmeadows.com

On Sat, Dec 22, 2018 at 12:14 AM Jordan Harband <[email protected]> wrote:
>
> Is there any evidence that existing arrays *prevent* engines from making
> optimizations? I believe (and any implementors, please correct me if I'm
> wrong) that the most common approach is to guess the likely maximum size for
> the array, double it, and then allocate that much memory - and thus all the
> array methods can remain optimized until the memory size needs to double
> again.
>
> On Fri, Dec 21, 2018 at 6:52 PM Isiah Meadows <[email protected]> wrote:
>>
>> Usually, the native resizable array is good enough. But sometimes, you
>> don't want to resize it, just allocate a slab of memory for objects.
>> Typed arrays already implement this for integers, but I find it quite
>> often in certain applications that I want to do this for arbitrary
>> objects, too. So how about a `FixedArray` analogue to `TypedArray`
>> that implements most of the same stuff `TypedArray` does, just without
>> the binary data integration or numeric coercion?
>>
>> There's a few other benefits to having a native fixed-size untyped array:
>>
>> 1. Engines can make major assumptions about allocation and use that to
>> improve performance and minimize allocations - notably, they can
>> choose to allocate the array's entries in the object itself, since it
>> can't change for the lifetime of that object. This is the norm in some
>> languages, like Rust.
>> 2. Engines can optimize things like `.map`, `.filter`, `.concat`, and
>> similar far more easily. Those don't have to be generic, and since
>> they can assume everything's contiguous, it's much easier to
>> implement.
>> 3. Engines can make guarantees about allocation size, and that's the
>> whole point of this proposal. These objects only require a minimum of
>> type info, GC info, its length, and `length` number of machine
>> pointers.
>>
>> Of course, yes, normal arrays *could* provide most of these, but
>> consider how V8 internally uses an `InternalArray` extensively to
>> implement many JS built-ins in JS and its CodeStubAssembler DSL. It's
>> also worth noting V8 didn't start inlining `Array.prototype.map` and
>> friends for dense arrays until it figured out how to bail out of a
>> JIT-compiled loop from within that loop with negligible overhead (it
>> took an internal redesign), and no other engine I'm aware of performs
>> this optimization, either - it's all because JS's resizable arrays can
>> also be sparse.
>>
>> -----
>>
>> Isiah Meadows
>> [email protected]
>> www.isiahmeadows.com

_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss

