On 6/24/2013 8:50 PM, Clint Talbert wrote:
Decoder and Jcranmer got code coverage working on Try[1]. They'd like to expand this into something that runs automatically, generating results over time so that we can actually know what our code coverage status is with our major run-on-checkin test harnesses. While both Joduinn and I are happy to turn this on, we have been down this road before. We got code coverage stood up in 2008 and ran it for a while, but when it became unusable and fell apart, we were left with no option but to turn it off.

I think one of the problems with the old code coverage stuff was that it got almost no visibility.


Now, Jcranmer and Decoder's work is of far higher quality than that old run, but before we invest the work in automating it, I want to know if this is going to be useful and whether or not I can depend on the community of platform developers to address the inevitable issues where some checkin, somewhere, breaks the code coverage build. Do we have your support? Will you find the generated data useful? I know I certainly would, but I need more buy-in than that (I can just use Try if I'm the only one concerned about it). Let me know your thoughts on measuring code coverage and owning breakages to the code coverage builds.

If you are just attempting to cover C/C++ code, then code coverage amounts to a few extra flags in CFLAGS/CXXFLAGS/LDFLAGS. The biggest problem is that it increases runtime, which could push us past some timeout thresholds, and, if you use --disable-debug --enable-optimize='-g', some tests actually crash instead (mostly a set of tests in the addon manager).
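Concretely, the "few extra flags" are the standard GCC/clang gcov instrumentation flags; this is just a sketch of the idea (the exact mozconfig plumbing is an assumption here, not what runs on Try):

```shell
# Standard gcov-style instrumentation (equivalent to passing --coverage):
export CFLAGS="-fprofile-arcs -ftest-coverage"
export CXXFLAGS="-fprofile-arcs -ftest-coverage"
export LDFLAGS="-fprofile-arcs -ftest-coverage"
# Running the instrumented tests then drops .gcda count files next to each
# object file, which gcov/lcov post-process into coverage reports.
```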

I'm personally a bit of a data visualization junkie. One of the projects I've started but haven't completed is an animated video of how code coverage evolves in our test suite. I ran an experiment several years ago where I built something that approximated the Thunderbird nightly revision, ran code coverage on its tests, and made a video of the results (my blog post describing this is here: <http://quetzalcoatal.blogspot.com/2010/04/animated-code-coverage.html>; the video is no longer available, but I still have all of the source material lying around). This also leads me to build tools like <http://www.tjhsst.edu/~jcranmer/c-ccov/coverage.html?dir=mailnews>. To be able to do these kinds of projects, I essentially just need the LCOV .info files (I use my own HTML generation scripts since I found LCOV to be too slow for me, and I have other minor UI gripes), especially if coverage is broken down by testsuite [1].
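For anyone curious why the .info files are all that's needed: they are a simple line-oriented text format, so custom tooling like the above is easy to write. A minimal sketch of extracting per-file line coverage from one (the function name and summary tuple are my own illustration, not LCOV's tooling):

```python
def parse_lcov_info(path):
    """Yield (source_file, lines_hit, lines_found) for each record
    in an LCOV .info file. Records start with SF:<path>, carry
    DA:<line>,<count> entries, and end with end_of_record."""
    source, hit, found = None, 0, 0
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("SF:"):            # start of a source-file record
                source, hit, found = line[3:], 0, 0
            elif line.startswith("DA:"):          # DA:<line>,<execution count>[,<checksum>]
                _, count = line[3:].split(",")[:2]
                found += 1
                if int(count) > 0:
                    hit += 1
            elif line == "end_of_record" and source is not None:
                yield source, hit, found
                source = None
```

From there, rolling up directory totals or diffing two .info files (e.g. one per testsuite) is straightforward list processing.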


Also, what do people think about standing up JSLint as well (in a separate automation job)? We should treat these as two entirely separate things, but if that would be useful, we can look into it as well. We can configure the rules around JSLint to be amenable to our practices and simply enforce against specific errors we don't want in our JS code. If the JS style flamewars start up, I'll split this question into its own thread because they are irrelevant to my objective here. I want to know whether it would be useful to have something like this for JS or not. If we do decide to use something like JSLint, then I will be happy to facilitate JS-style flamewars because they will then be relevant to defining what we want the lint to do, but until that decision is made, let's hold them in check.

The code in mozilla-central and comm-central tends to aggressively use new JS features added to SpiderMonkey, such as yield, array comprehensions, etc. A brief test of JSLint shows that it doesn't support these features, which makes it a non-starter in my opinion. If we want JS static analysis tools running on our codebase, then they probably ought to be based on Reflect.parse in SpiderMonkey (which knows about these things), and arguably on technology similar to the amo checker.

[1] Another long-term project I've had but haven't put enough time into is getting this kind of data on all platforms, not just Linux64.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform