Hello everyone,

A few months ago, I enabled test coverage recording for the MXNet
repository [1]. The next step would be to report coverage changes on
pull requests so we can make them part of the review. For this, I've
created a pull request at [2].

Before this feature can be enabled, though, we have to stabilize the test
coverage. Due to the extensive use of random inputs in our tests, the
execution takes different paths on every run and the coverage results
therefore vary heavily. If we enabled the coverage report for PRs now, they
would be flagged as increasing or decreasing coverage without actually
doing so. For a few examples, see [3].

I'd like to provide some detailed examples or screenshots, but with big
reports [4], Codecov's web interface tends to time out. I currently have an
open ticket with Codecov that will hopefully improve the situation soon,
but until then we have to live with the timeouts.

My proposal to improve the situation is to divide the work across the
community in two stages:
1. Replace random inputs in tests with deterministic ones where applicable
(see the sketch right after this list).
2. If randomness cannot be avoided or the 'flakiness' persists, run the
same test multiple times with coverage enabled, look at the coverage diff
and then write targeted unit tests for the inconsistent parts (a sketch of
this follows further down).
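
To give a rough idea of stage 1, here is a minimal Python sketch; the
operator, values and test name are made up for illustration. The point is
only that fixing the seeds and hand-picking an input that exercises both
branches means every run executes the same code paths, so the recorded
coverage stays stable:

import numpy as np
import mxnet as mx

def test_relu_deterministic():
    # Before (illustrative): a random input can hit different branches
    # on every run, e.g. an all-positive draw skips the negative path.
    #   data = mx.nd.random.uniform(-1, 1, shape=(3, 4))

    # After: fix the seeds and use a hand-picked input that covers both
    # positive and negative values, so the executed paths (and therefore
    # the coverage) are identical on every run.
    np.random.seed(42)
    mx.random.seed(42)
    data = mx.nd.array([[-0.5, 0.0, 0.5, 1.0],
                        [1.0, -1.0, 0.25, -0.25],
                        [0.0, 0.75, -0.75, 0.5]])

    out = mx.nd.relu(data)
    expected = np.maximum(data.asnumpy(), 0)
    assert np.allclose(out.asnumpy(), expected)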

If there is broader interest in test coverage, I'd be happy to write a
guide that explains how to measure test coverage and detect flaky coverage
areas so everybody can help contribute towards stable test coverage.
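
To illustrate what the 'detect flaky coverage areas' part of such a guide
could look like, here is a rough, self-contained sketch that uses the
coverage.py API directly (outside our actual CI tooling; 'flaky_test' is
just a placeholder). It runs the same test twice and prints the lines whose
coverage differs between the runs:

import random
import coverage

def flaky_test():
    # Placeholder for a real unit test: a random input decides which
    # branch gets executed, so coverage can vary between runs.
    x = random.uniform(-1, 1)
    if x < 0:
        result = -x
    else:
        result = x
    assert result >= 0

def run_under_coverage(test_fn, data_file):
    # Record which lines a single run of the test executes.
    cov = coverage.Coverage(data_file=data_file)
    cov.start()
    try:
        test_fn()
    finally:
        cov.stop()
        cov.save()
    return cov.get_data()

run_a = run_under_coverage(flaky_test, ".coverage_run_a")
run_b = run_under_coverage(flaky_test, ".coverage_run_b")

for fname in sorted(set(run_a.measured_files()) | set(run_b.measured_files())):
    diff = set(run_a.lines(fname) or []) ^ set(run_b.lines(fname) or [])
    if diff:
        # Lines covered in one run but not the other point at the code
        # paths that need a targeted, deterministic unit test.
        print(fname, sorted(diff))

Lines that only show up in one of the two runs are exactly the places where
stage 2 would add targeted, deterministic unit tests.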
Please let me know what you think.

Best regards,
Marco

[1]: https://codecov.io/gh/apache/incubator-mxnet
[2]: https://github.com/apache/incubator-mxnet/pull/12648
[3]: https://codecov.io/gh/apache/incubator-mxnet/pulls
[4]: https://codecov.io/gh/apache/incubator-mxnet/pull/13849
