We should be careful not to conflate developers running unit tests
locally with automated test reporting; flakey tests play different
roles in those two scenarios.
For example, I no longer pay attention to automated failure reports,
especially if I haven't committed anything recently.
However, when I'm making code changes and do "ant test", I certainly
pay attention to failures and re-run any failing tests.  It sucks to
have to re-run a test just because it's flakey, but it's better than
accidentally committing a bug because test coverage was reduced.
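
For what it's worth, re-running just the suspect test is usually
something like the line below (TestFoo is only a placeholder, and the
exact property name may vary a bit between branches):

  ant test -Dtestcase=TestFoo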

I'd suggest:
1) fix/tweak automated test reporting so the reports are more relevant
to developers
2) open a JIRA for each flakey test and evaluate the impact of removing
it on test coverage (see the sketch after this list for one way a
disabled test can point at its JIRA)
3) If a new feature is added and its test turns out to be flakey, then
the feature itself should be disabled before release.  This keeps new
flakey tests out without giving up test coverage, and it motivates
those who care about the feature to fix the tests.
4) fix flakey tests ;-)
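
As a rough sketch of what #2 and #3 could look like in practice (the
class name and JIRA number are made up, and this uses plain JUnit's
@Ignore; the Lucene test framework may offer something more specific):

  import org.junit.Ignore;
  import org.junit.Test;

  // Hypothetical flakey test: it stays in the tree and points at its
  // JIRA, but is skipped until the flakiness is resolved.
  // TestFlakyFeature and LUCENE-XXXX are placeholders.
  public class TestFlakyFeature {

    @Ignore("LUCENE-XXXX: fails intermittently; re-enable once fixed")
    @Test
    public void testSometimesFails() {
      // ... the flakey assertions would go here ...
    }
  }

That keeps the test visible (it shows up as ignored rather than being
silently deleted) and leaves a pointer to the JIRA in the source.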

-Yonik
