On Fri, Apr 21, 2017 at 11:21 AM, Johannes Rieken wrote:
> For some background: we have been using cached data for a while in VS
> Code and are quite happy with it. It's one of the simpler ways to improve
> start-up time. The only downside is the slowdown on first [...]
As Jochen already said on chromium-dev, --always-opt does not make things
faster. This is expected. The purpose of the flag is to flush out certain
kinds of bugs when running tests, at the cost of a big slowdown.
Code caching has limits. It cannot cache everything.
The default configuration is [...]
For some background: we have been using cached data for a while in VS Code
and are quite happy with it. It's one of the simpler ways to improve
start-up time. The only downside is the slowdown on first start, when we
generate and store the cached data, and I was thinking about making our
build [...]
Thanks for clarifying!
On Friday, April 21, 2017 at 10:00:03 AM UTC+2, Ben Noordhuis wrote:
>
> On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken wrote:
> > Does the data depend on things like endian-ness, CPU etc or only
> > on v8-locals like v8-version and v8-flags?
As Ben says: the code cache is specific to the exact source string, the CPU
(CPU family and CPU flags, e.g. whether certain features are supported), the
exact V8 version (since there is no mechanism to guarantee correctness
across versions), and the compile options (debug, release, certain features
[...])
On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken wrote:
> Does the data depend on things like endian-ness, CPU etc or only
> on v8-locals like v8-version and v8-flags?
All of the above; it's machine-, version- and invocation-specific.
--
v8-users mailing list
Hello,
I am trying to reduce execution time in general cases. My approach is a
combination of two features: fully optimized code generation + code caching.
In my experiments, I found that code caching is very powerful. However,
the execution time increased significantly when I was [...]