On Thu, Apr 28, 2016 at 8:48 AM, Nicholas Nethercote <[email protected]
> wrote:

>
> - Would "extended assertions" help? By this I mean verification passes
> over complex data structures. Compilers often have these, e.g. after
> each pass you can optionally run a pass that does a thorough sanity
> check of the IR. Do we have that for the JITs? Would something like
> that make sense for GC? ("Code generators and garbage collectors
> should crash as early and as loudly as possible.")
>

We do have this for IonMonkey in debug builds:
- After every pass we call AssertBasicGraphCoherency, AssertGraphCoherency
or AssertExtendedGraphCoherency.
- For every LIR instruction we assert that the result has the expected type
(with respect to TI info).

Both happen only in debug builds and make JS painfully slow. That is not an
issue in the shell, but it is quite annoying in the browser. Perhaps as a
result, few or no people are running debug builds anymore?
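To make the idea concrete, here is a minimal sketch of the kind of per-pass graph verification described above. All names here (Block, checkEdgeCoherency) are made up for illustration; the real IonMonkey AssertGraphCoherency checks far more (dominance information, use chains, phi operand counts, and so on).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical, minimal CFG: each block records its successor and
// predecessor edges, and a coherency pass verifies the two lists agree.
struct Block {
    std::vector<size_t> successors;
    std::vector<size_t> predecessors;
};

// Returns true iff every successor edge has a matching predecessor edge
// and vice versa -- the kind of structural invariant a per-pass verifier
// asserts so that a pass that corrupts the graph crashes early and loudly.
bool checkEdgeCoherency(const std::vector<Block>& graph) {
    auto contains = [](const std::vector<size_t>& edges, size_t target) {
        for (size_t e : edges)
            if (e == target)
                return true;
        return false;
    };
    for (size_t b = 0; b < graph.size(); b++) {
        for (size_t s : graph[b].successors)
            if (s >= graph.size() || !contains(graph[s].predecessors, b))
                return false;
        for (size_t p : graph[b].predecessors)
            if (p >= graph.size() || !contains(graph[p].successors, b))
                return false;
    }
    return true;
}
```

A debug build would call such a check after every pass and assert on the result, which is exactly why the cost adds up.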

- Should we look at making the browser less painful to run in debug builds
(which will make it harder to get the expected asserts), hoping that more
people run it? Or will people still not run debug builds?
- Should we look into running those asserts in release builds (on Nightly)
on an infrequent 'random' basis, while making sure it doesn't hurt
performance too much?
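The "random basis in release builds" idea could look something like the following sketch. The names (sampledCheckEnabled, SAMPLED_ASSERT) and the 1-in-N gating scheme are assumptions for illustration, not an existing SpiderMonkey mechanism.

```cpp
#include <cassert>
#include <cstdlib>

// Gate an expensive check behind a cheap random test so that only
// roughly 1 in N executions pays the cost in a release/Nightly build.
static bool sampledCheckEnabled(unsigned oneInN) {
    // std::rand() is fine for illustration; a real build would want a
    // deterministic per-process seed so fuzzers can reproduce failures.
    return (std::rand() % oneInN) == 0;
}

// Run `expensiveCheck` on roughly 1 out of `oneInN` executions.
#define SAMPLED_ASSERT(oneInN, expensiveCheck) \
    do {                                       \
        if (sampledCheckEnabled(oneInN))       \
            assert(expensiveCheck);            \
    } while (0)
```

With oneInN tuned high enough, the amortized cost stays low while the asserts still fire occasionally across the whole Nightly population.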

The asserts mentioned here were mostly added for fuzzers, where
deterministic behaviour is very much needed.
However, the fuzzers only work on shells. Do we need to take a different
approach for browser builds?

Note: this is probably a small category of failures. I assume bug 1268029
will help a lot, instead of only having one bucket.

- How can we respond to problems? E.g. bug 1232229 as an example where
> a more aggressive approach to backouts would likely have resulted in a
> topcrash diagnosis occurring a lot earlier than it eventually did.
>

Another issue I want to raise is that we don't have any information about
whether EnterBaselineMethod crashes are increasing or decreasing over time.
It might help to have this statistic and see how it evolves over time. That
way we would at least have an idea of whether we are improving or
regressing.
I have had patches where I would have expected a possible rise or decrease
in crashes, but since we have no information it is quite hard to tell.
Having this metric (even over a longer pushlog) would make it possible to
pinpoint possible patches, back them out, and see if that helps the stats.
(Not trivial to do, since we would have to look only at Nightly and de-bias
over the number of users and hours used?)
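The de-biasing step could be as simple as normalizing raw crash counts to a rate per 1000 usage hours, so Nightlies with different population sizes become comparable. This is a sketch of the arithmetic only; the function name and the telemetry inputs are hypothetical.

```cpp
// Raw crash counts are not comparable across Nightlies with different
// user populations, so normalize to crashes per 1000 usage hours.
// Inputs would come from crash-stats and telemetry usage-hour aggregates.
double crashesPer1000Hours(double crashCount, double totalUsageHours) {
    if (totalUsageHours <= 0.0)
        return 0.0;  // no data for this build; avoid dividing by zero
    return crashCount * 1000.0 / totalUsageHours;
}
```

For example, 5 crashes over 10000 usage hours and 50 crashes over 100000 usage hours are the same 0.5 rate, even though the raw counts differ by 10x.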

Making this metric public would give all JS people the power to look at it
and find ways to drive it down. A bit like what AWFY did for performance
regressions.
_______________________________________________
dev-tech-js-engine-internals mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals