As Jochen already said on chromium-dev, --always-opt does not make things
faster. This is expected. The purpose of the flag is to flush out certain
kinds of bugs when running tests, at the cost of a big slowdown.

Code caching has limits. It cannot cache everything; in particular, optimized
code is not part of the cache, so TurboFan still compiles hot functions at
runtime even when the cache is consumed, which is what your --trace-opt output
shows.
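
For context, here is a rough sketch of what producing and consuming a code
cache looks like on the embedder side with v8::ScriptCompiler. Treat it as
illustrative only: the function names are mine, error handling is minimal,
and the usual platform/Isolate/Context setup from the embedding samples is
assumed.

// Sketch only: assumes the standard embedder setup (platform, Isolate,
// HandleScope, Context::Scope) from the V8 embedding samples has already
// been done. Function names are made up for illustration.
#include <string>
#include "v8.h"

// First run: compile the script and keep the cache blob that V8 produced.
std::string ProduceCache(v8::Isolate* isolate,
                         v8::Local<v8::String> source_text) {
  v8::ScriptCompiler::Source source(source_text);
  v8::ScriptCompiler::CompileUnboundScript(
      isolate, &source, v8::ScriptCompiler::kProduceCodeCache)
      .ToLocalChecked();
  const v8::ScriptCompiler::CachedData* data = source.GetCachedData();
  // Persist this blob (e.g. write it to disk) for the next run.
  return std::string(reinterpret_cast<const char*>(data->data), data->length);
}

// Later run: hand the blob back so V8 can skip the work it cached.
void ConsumeCacheAndRun(v8::Isolate* isolate, v8::Local<v8::Context> context,
                        v8::Local<v8::String> source_text,
                        const std::string& blob) {
  auto* cached = new v8::ScriptCompiler::CachedData(
      reinterpret_cast<const uint8_t*>(blob.data()),
      static_cast<int>(blob.size()),
      v8::ScriptCompiler::CachedData::BufferNotOwned);
  // The Source takes ownership of |cached|; |blob| must outlive this call.
  v8::ScriptCompiler::Source source(source_text, cached);
  v8::Local<v8::UnboundScript> script =
      v8::ScriptCompiler::CompileUnboundScript(
          isolate, &source, v8::ScriptCompiler::kConsumeCodeCache)
          .ToLocalChecked();
  script->BindToCurrentContext()->Run(context).ToLocalChecked();
}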

The default configuration is what we believe gives the best performance in
general cases. There are no "secret" flags to make things faster.

On Fri, Apr 21, 2017 at 9:27 AM, Jin Chul Kim <jinc...@gmail.com> wrote:

> Hello,
>
> I am trying to reduce execution time in general cases. My approach is a
> combination of two features: fully optimized code generation + code caching.
> In my experiments, I found that code caching is very powerful. However, the
> execution time increased significantly when I used the following flag:
> --always-opt. I know TurboFan is enabled in recent V8 code. Here are my
> questions.
>
> 1. As far as I know, with code caching no compilation is needed to generate
> machine (native) code. Is that correct?
>
> I checked the trace with --trace-opt. There were many lines about optimizing
> and compiling even with code caching. Why did they happen?
>
> [compiling method 0xed4b9e340c1 <JSFunction BenchmarkSuite.NotifyStep (sfi
> = 0xed4b9e31ea1)> using TurboFan]
> [optimizing 0xed4b9e340c1 <JSFunction BenchmarkSuite.NotifyStep (sfi =
> 0xed4b9e31ea1)> - took 0.081, 0.356, 0.039 ms]
> [compiling method 0x3d3958a8f609 <JSFunction RunNextTearDown (sfi =
> 0xed4b9e41829)> using TurboFan]
> [optimizing 0x3d3958a8f609 <JSFunction RunNextTearDown (sfi =
> 0xed4b9e41829)> - took 0.109, 0.560, 0.060 ms]
> [compiling method 0x3d3958a93a81 <JSFunction Benchmark.TearDown (sfi =
> 0xed4b9e3d401)> using TurboFan]
> [optimizing 0x3d3958a93a81 <JSFunction Benchmark.TearDown (sfi =
> 0xed4b9e3d401)> - took 0.028, 0.086, 0.011 ms]
> ...
>
> 2. Can you explain why the execution time increases significantly with opt.
> + caching (cases 4) and 5) below)?
>
> When I use the flag (--always-opt), the compiler may generate optimized or
> unoptimized code. I would then expect the second run to perform the same as
> or better than the first run, because it does not need to compile and just
> loads the binary. Please see my experiment results below:
>
> - baseline: w/o opt. + w/o caching
> 1) 24.06 secs
>
> - w/o opt. + w/ caching
> 2) 1st run(save native code): 24.35 secs
> 3) 2nd run(load native code): 16.94 secs
>
> - w/ opt. + w/ caching
> 4) 1st run(save native code): 75.12 secs
> 5) 2nd run(load native code): 74.02 secs
>
> 3. How can I generate optimal code to decrease execution time when using
> code caching?
>
> Many thanks,
> Jinchul
>
