> We should get rid of long-running unit tests altogether. They should run 
> faster or be split.

I think we just need to evaluate on a case-by-case basis. Some tests are bad 
and need to go, and we need other, better ones to replace them. I am 
deliberately not giving examples here, both to avoid controversy and to 
highlight that this will be a long process. 

> I'm still confused about the distinction between burn and fuzz tests - it 
> seems to me that fuzz tests are just modern burn tests - should we refactor 
> the existing burn tests to use the new framework?

At the moment we do not have a coherent generator framework. We have roughly 
15 different ways to generate data and run tests. We need to evaluate them and 
bring them together.
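
To make "bring them together" slightly more concrete, one direction is a 
single, seed-driven generator abstraction that everything else composes on 
top of. This is only a rough sketch; the names Generator and RandomSource 
below are illustrative, not an existing in-tree API:

    import java.util.function.Function;

    // All randomness flows through one seeded source, so any generated value
    // (schema, row, workload) is reproducible from the seed alone.
    interface RandomSource
    {
        long nextLong();
        int nextInt(int bound);
    }

    interface Generator<T>
    {
        T generate(RandomSource rng);

        // Generators compose, so higher-level data is built from the same
        // primitives instead of ad-hoc helpers scattered across test suites.
        default <R> Generator<R> map(Function<T, R> f)
        {
            return rng -> f.apply(generate(rng));
        }
    }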

> 3. Simulation tests - since you say they provide a way to execute a test 
> deterministically, it should be a property of unit tests - well, a unit test 
> is either deterministic or a fuzz test.

Unit tests do not come with a guarantee of determinism. The fact that you have 
determinism from the perspective of the API (i.e. the test is driven by a 
single thread) has no implications for the behaviour of the system underneath. 
The Simulator guarantees that all executions, including concurrent ones, are 
fully deterministic: messaging, executors, threads, delays, timeouts, etc. 
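
To illustrate the difference (this is only a toy sketch, not how the Simulator 
is actually implemented): imagine all "concurrent" work is handed to a 
scheduler that picks the next runnable task from a seeded RNG. The whole 
interleaving then becomes a pure function of the seed, so a failing run can be 
replayed exactly:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Toy model of deterministic concurrency: tasks never run in parallel,
    // the scheduler chooses which runnable task goes next from a seeded RNG,
    // so the interleaving is fully reproducible from the seed.
    final class DeterministicScheduler
    {
        private final List<Runnable> runnable = new ArrayList<>();
        private final Random rng;

        DeterministicScheduler(long seed)
        {
            this.rng = new Random(seed);
        }

        void submit(Runnable task)
        {
            runnable.add(task);
        }

        void runToCompletion()
        {
            while (!runnable.isEmpty())
                runnable.remove(rng.nextInt(runnable.size())).run();
        }
    }

The real thing also has to intercept messaging, executors, delays and 
timeouts, which is what the paragraph above refers to.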

> I've noticed many "sleeps" in the tests - is it possible with simulation 
> tests to artificially move the clock forward by, say, 5 seconds instead of 
> sleeping just to test, for example whether TTL works?)

Yes, the Simulator will skip the real sleep and perform a simulated sleep 
against a simulated clock instead. 
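
Conceptually (again, just a sketch of the idea, not the Simulator's actual 
mechanics), it boils down to the code under test reading time from an injected 
clock, and the test advancing that clock explicitly instead of sleeping:

    import java.time.Duration;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative virtual clock: tests move time forward explicitly.
    final class VirtualClock
    {
        private final AtomicLong millis = new AtomicLong(0);

        long nowMillis()
        {
            return millis.get();
        }

        void advance(Duration d)
        {
            millis.addAndGet(d.toMillis());
        }
    }

    // Hypothetical TTL-style check driven by the injected clock.
    final class TtlEntry
    {
        private final long expiresAtMillis;

        TtlEntry(VirtualClock clock, Duration ttl)
        {
            this.expiresAtMillis = clock.nowMillis() + ttl.toMillis();
        }

        boolean isExpired(VirtualClock clock)
        {
            return clock.nowMillis() >= expiresAtMillis;
        }
    }

A test can then call clock.advance(Duration.ofSeconds(5)) and assert that the 
entry is expired, without ever blocking on a real sleep.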

> Also, as we start refactoring the tests, it will be an excellent opportunity 
> to move to JUnit 5.

I am working on bringing Harry in-tree. I will need many reviewers and 
collaborators to make the test suite more powerful and coherent. It would be 
nice to have a bit more leniency and flexibility, and shorter turnarounds, 
when we deal with tests, at least in the early phases.

Thank you for your interest in the subject; I think there is a lot for us to 
do here.

On Fri, Dec 1, 2023, at 1:31 PM, Jacek Lewandowski wrote:
> Thanks for the exhaustive response, Alex :)
> 
> Let me bring my point of view:
> 
> 1. Since long tests are just unit tests that take a long time to run, it 
> makes sense to separate them for efficient parallelization in CI. Since we 
> are adding new tests, modifying the existing ones, etc., that should be 
> something maintainable; otherwise, the distinction makes no sense to me. For 
> example - adjust timeouts on CI to 1 minute per test class for "short" tests 
> and more for "long" tests. To satisfy CI, the contributor will have to either 
> make the test run faster or move it to the "long" tests. The opposite 
> enforcement could be more difficult, though it is doable as well - failing 
> the "long" test if it takes too little time and should be qualified as a 
> regular unit test. As I'm reading what I've just written, it sounds stupid :/ 
> We should get rid of long-running unit tests altogether. They should run 
> faster or be split.
> 
> 2. I'm still confused about the distinction between burn and fuzz tests - it 
> seems to me that fuzz tests are just modern burn tests - should we refactor 
> the existing burn tests to use the new framework?
> 
> 3. Simulation tests - since you say they provide a way to execute a test 
> deterministically, it should be a property of unit tests - well, a unit test 
> is either deterministic or a fuzz test. Is the simulation framework usable 
> for CQLTester-based tests? (side question here: I've noticed many "sleeps" in 
> the tests - is it possible with simulation tests to artificially move the 
> clock forward by, say, 5 seconds instead of sleeping just to test, for 
> example whether TTL works?)
> 
> 4. Yeah, running a complete suite for each artificially crafted configuration 
> brings little value compared to the maintenance and infrastructure costs. It 
> feels like we are running all tests a bit blindly, hoping we catch something 
> accidentally. I agree this is not the purpose of the unit tests and should be 
> covered instead by fuzz. For features like CDC, compression, different 
> sstable formats, trie memtable, commit log compression/encryption, system 
> directory keyspace, etc... we should have dedicated tests that verify just 
> that functionality.
> 
> With more and more functionality offered by Cassandra, they will become a 
> significant pain shortly. Let's start thinking about concrete actions. 
> 
> Also, as we start refactoring the tests, it will be an excellent opportunity 
> to move to JUnit 5.
> 
> thanks,
> Jacek
