Comment #3 on issue 1750 by wonchun: Improve performance of mesh
decompression benchmark
http://code.google.com/p/v8/issues/detail?id=1750
One important detail for (1): the strings being processed are not ASCII.
Since (2) was on my end, I decided to change the associative array to a
regular array, and it indeed appears to help a bit. However, FF is still
~2x as fast.
If it wasn't already obvious, most of the time is spent in
decompressInner_. Pulling it out of line was a win, but maybe that had
something to do with its interaction with for...in. My original suspicion
when I did it was that there was a function-size heuristic I needed to
get around. I was actually hoping the JIT would notice that most of the
parameters don't change between invocations (namely stride,
decodeOffset, and decodeScale) and generate specialized versions of the
method with those constants baked in. Do you think it would be worth
writing the specialization code in JS? Is V8 as effective at optimizing
functions that are built at runtime from strings? Or should I do this with
a higher-order method? Is this something I should expect V8 to do
automatically?
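For concreteness, here is a minimal sketch of the two specialization
approaches I'm weighing. The decode loop body and names are made up for
illustration; only stride, decodeOffset, and decodeScale come from the
real code:

```javascript
// Approach 1: build the specialized function from a string, so the
// constants appear as literals in the generated source.
function makeDecompressor(stride, decodeOffset, decodeScale) {
  return new Function('codes', 'out',
      'for (var i = 0; i < codes.length; i += ' + stride + ') {' +
      '  out[i] = (codes.charCodeAt(i) + ' + decodeOffset + ') * ' +
           decodeScale + ';' +
      '}' +
      'return out;');
}

// Approach 2: a higher-order method returning a closure over the same
// constants; the engine must discover on its own that they never change.
function makeDecompressorClosure(stride, decodeOffset, decodeScale) {
  return function (codes, out) {
    for (var i = 0; i < codes.length; i += stride) {
      out[i] = (codes.charCodeAt(i) + decodeOffset) * decodeScale;
    }
    return out;
  };
}
```

Both produce a function per (stride, decodeOffset, decodeScale) tuple;
the question is whether V8 optimizes one form better than the other.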
Thanks so much for looking at this!
--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev