Github user srowen commented on the issue: https://github.com/apache/spark/pull/22641

I would vote against randomly running tests that we think have any value. It's just the wrong way to solve the problem, as much as it would be to simply run 90% of our test suites each time on the theory that eventually we'd catch bugs.

If there are, say, 3 codecs, and the point is to test whether one specified codec overrides another, does that really need more than 1 test? Is there any reason to believe that the override works or doesn't work differently for different codecs? Or are 3 tests sufficient, one to test the overriding of each?

If not, I'd say do nothing. The maximum win here is about a minute of test time, which isn't worth it. Or: can these cases be parallelized within the test suite?
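The "3 tests, one per codec" suggestion could be sketched roughly as below. This is a hypothetical illustration, not Spark's actual test suite: the codec list, the `resolveCodec` helper, and the override semantics (a per-table option beating the session default) are all stand-ins assumed for the example.

```scala
// Hypothetical sketch: one check per codec instead of a codec x codec
// cross-product. resolveCodec is an illustrative stand-in, not Spark's API.
object CodecOverrideSketch {
  // Assumed codec names for illustration.
  val codecs = Seq("lz4", "snappy", "zstd")

  // Stand-in for "a specified codec overrides the session default".
  def resolveCodec(sessionDefault: String, tableOption: Option[String]): String =
    tableOption.getOrElse(sessionDefault)

  def main(args: Array[String]): Unit = {
    // One assertion per codec: the override takes effect for each of the 3.
    for (codec <- codecs) {
      assert(resolveCodec("lz4", Some(codec)) == codec)
    }
    // One assertion that the default applies when nothing overrides it.
    assert(resolveCodec("snappy", None) == "snappy")
  }
}
```

In a ScalaTest suite the same shape would be `codecs.foreach { c => test(s"override to $c") { ... } }`, which keeps the count at one test per codec rather than one per pair.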