Thank you, Jakob. You are indeed right. I updated the code to use regular JS 
arrays [1], and the garbage is gone:
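Concretely, the change amounts to storing each body's coordinates in a plain
Array literal of numbers instead of named object properties. A minimal sketch
(the field names are illustrative, not the actual code from the gist):

```javascript
// Before: each named double property forced V8 (at the time) to box the
// value as a HeapNumber on the garbage-collected heap.
function BodyObject(x, y, z) {
  this.x = x;
  this.y = y;
  this.z = z;
}

// After: a regular JS Array that only ever holds numbers lets V8 keep the
// elements in unboxed double form, so no per-number allocation occurs.
function Body(x, y, z) {
  this.position = [x, y, z];
}

var b = new Body(1.5, 2.5, 3.5);
console.log(b.position[0] + b.position[1] + b.position[2]); // 7.5
```

The one thing to watch is keeping the array numbers-only: storing a
non-number (or `undefined`) in it makes V8 fall back to boxed storage.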

V8:
> time ./d8 ./nbody_js_arrays.js -- 50000000 
-0.169075164
-0.169059907
./d8 ./nbody_js_arrays.js -- 50000000  *24.93s* user 0.06s system 99% cpu 
25.061 total

SpiderMonkey suffers with this approach though:
> time ./js ./nbody_js_arrays.js 50000000
-0.169075164
-0.169059907
./js ./nbody_js_arrays.js 50000000  *35.07s* user 0.09s system 98% cpu 
35.817 total
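For reference, the Float64Array variant [3] from my original message below
packs all bodies into one flat buffer and addresses fields by offset. A rough
sketch (the stride and field order here are assumptions, not the exact layout
from the gist):

```javascript
// All bodies live in one flat Float64Array; each body occupies STRIDE
// consecutive slots. Assumed layout per body: x, y, z, vx, vy, vz, mass.
var STRIDE = 7;
var N_BODIES = 5;
var bodies = new Float64Array(N_BODIES * STRIDE);

function setBody(i, x, y, z, vx, vy, vz, mass) {
  var o = i * STRIDE;
  bodies[o]     = x;  bodies[o + 1] = y;  bodies[o + 2] = z;
  bodies[o + 3] = vx; bodies[o + 4] = vy; bodies[o + 5] = vz;
  bodies[o + 6] = mass;
}

function getMass(i) {
  return bodies[i * STRIDE + 6];
}

setBody(0, 0.0, 1.5, -2.5, 0.1, 0.0, 0.0, 1.0);
console.log(getMass(0)); // 1
```

The typed array stores raw doubles, so reads and writes never allocate; the
cost is manual offset bookkeeping in place of named properties.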

I guess my best bet would be to wait and see what's coming :).

Cheers,
Andrei


On Tuesday, April 23, 2013 1:27:14 AM UTC-7, Jakob Kummerow wrote:
>
> One more clarification: to avoid allocation of HeapNumbers (and therefore 
> GC) in existing V8 versions, it's not necessary to re-write the app using 
> Float64Arrays -- just using regular JavaScript Arrays is enough (e.g.: 
> "this.position = [x, y, z];"). As long as there are only numbers in the 
> array, V8 will detect this and store the numbers in unboxed form.
>
>
> On Tue, Apr 23, 2013 at 10:18 AM, Toon Verwaest <[email protected]> wrote:
>
>> There's work in progress to solve this issue. Stay tuned ;)
>>
>>
>> On Tue, Apr 23, 2013 at 9:54 AM, Ben Noordhuis <[email protected]> wrote:
>>
>>> On Mon, Apr 22, 2013 at 11:37 PM, Andrei Kashcha <[email protected]> wrote:
>>> > Recently I've been profiling V8's garbage collection a lot. The
>>> > surprising truth is that it's really slow when a JS program does heavy
>>> > computation. For example, consider a straightforward n-body computation
>>> > for 5 bodies, over 50,000,000 iterations [1].
>>> >
>>> > When running this program in d8:
>>> >
>>> >> time ./d8 ./nbody_plain_objects.js -- 50000000
>>> > -0.169075164
>>> > -0.169059907
>>> > ./d8 ./nbody_plain_objects.js -- 50000000  46.95s user 0.07s system
>>> > 99% cpu 47.036 total
>>> >
>>> > Now compare the same results with Mozilla's SpiderMonkey shell [2]:
>>> >
>>> >> time ./js ./nbody_plain_objects.js 50000000
>>> > -0.169075164
>>> > -0.169059907
>>> > ./js ./nbody_plain_objects.js 50000000  20.27s user 0.02s system
>>> > 99% cpu 20.288 total
>>> >
>>> > SpiderMonkey is more than two times faster! Why? It turns out V8
>>> > allocates a lot of numbers on the heap when using plain JavaScript
>>> > objects:
>>> >
>>> >  function Body(x, y, ...) {
>>> >      this.X = x;
>>> >      this.Y = y;
>>> >      ...
>>> >  }
>>> >
>>> > This creates lots of garbage and slows down the algorithm
>>> > significantly. The garbage collection can be avoided in V8 if the
>>> > program is rewritten to use Float64Arrays and a manually managed,
>>> > heap-like structure. Doing so [3] puts V8 at the same speed level as
>>> > SpiderMonkey:
>>> >
>>> >> time ./d8 ./nbody_array.js -- 50000000
>>> > -0.169075164
>>> > -0.169059907
>>> > ./d8 ./nbody_array.js -- 50000000  21.45s user 0.02s system 99% cpu
>>> > 21.487 total
>>> >
>>> > SpiderMonkey suffers slightly, but still shows decent results:
>>> >> time ./js ./nbody_array.js 50000000
>>> > -0.169075164
>>> > -0.169059907
>>> > ./js ./nbody_array.js 50000000  23.73s user 0.02s system 99% cpu 23.749
>>> > total
>>> >
>>> > I could definitely rewrite my programs to use native arrays, but I
>>> > don't really think this scales well for the larger audience of
>>> > programmers who are doing numerical computation in JS. Maybe I'm
>>> > missing a technique that would let me avoid GC entirely, with no need
>>> > to rewrite programs? I would also like to avoid imposing on my users
>>> > the need to launch Chrome (V8) with special flags...
>>> >
>>> > PS: All tests were done on a MacBook Pro, 2.4 GHz Intel Core i5, with
>>> > the latest x64.release build of V8 and the latest nightly build of
>>> > SpiderMonkey [2].
>>> >
>>> > [1] https://gist.github.com/anvaka/5438615
>>> > [2] http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/
>>> > [3] https://gist.github.com/anvaka/5438702
>>>
>>> That's because in V8, floating point numbers are (mostly)
>>> heap-allocated, while SpiderMonkey uses NaN tagging (though it calls
>>> it nunboxing, I believe).
>>>
>>> If you run your benchmarks with integral types instead, I'll bet good
>>> money that V8 comes out on top.
>>>
>>>
>

-- 
v8-users mailing list
[email protected]
http://groups.google.com/group/v8-users