>
> 1. Since long tests are just unit tests that take a long time to run,
>


Yes, they are just "resource intensive" tests, on a par with the "large" python
dtests: they require bigger machine specs to run.
They are great candidates for improvement so that they no longer need the
additional resources, but many offer value and cannot easily be reworked.



> 2. I'm still confused about the distinction between burn and fuzz tests -
> it seems to me that fuzz tests are just modern burn tests - should we
> refactor the existing burn tests to use the new framework?
>


Burn tests are not really tests that belong in the CI pipeline. We only run
them in the CI pipeline to validate that they still compile and run, so we
only need to run them for an absolute minimum amount of time.  Maybe it
would be nice if they were part of the checks stage instead of being their
own test type.



> 4. Yeah, running a complete suite for each artificially crafted
> configuration brings little value compared to the maintenance and
> infrastructure costs. It feels like we are running all tests a bit blindly,
> hoping we catch something accidentally. I agree this is not the purpose of
> the unit tests and should be covered instead by fuzz. For features like
> CDC, compression, different sstable formats, trie memtable, commit log
> compression/encryption, system directory keyspace, etc... we should have
> dedicated tests that verify just that functionality
>


I think everyone agrees here, but… these variations are still catching
failures, and until we have an improvement or replacement we do rely on
them.  I'm not in favour of removing them until we have proof/confidence
that any replacement is catching the same failures.  Especially oa, tries,
vnodes. (Note: tries and offheap are being replaced with "latest", which
will be a valuable simplification.)

Dedicated unit tests may also be parameterised tests, with a base
parameterisation that is extended based on analysis of what a patch touches…
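To illustrate that last idea, here is a minimal sketch of selecting extra parameterisations from the paths a patch touches. All names here (the class, the path prefixes, and the parameterisation labels) are hypothetical, not an existing Cassandra utility:

```java
import java.util.*;

public class ParamSelector {
    // Hypothetical mapping: when a patch touches code under a prefix,
    // the corresponding extra parameterisation is scheduled as well.
    static final Map<String, String> EXTRA = Map.of(
        "src/java/org/apache/cassandra/io/sstable/", "latest-sstable-format",
        "src/java/org/apache/cassandra/db/commitlog/", "commitlog-compression",
        "src/java/org/apache/cassandra/db/memtable/", "trie-memtable");

    // Returns the base parameterisation plus any extras triggered
    // by the touched paths.
    static Set<String> parameterisations(Collection<String> touchedPaths) {
        Set<String> params = new LinkedHashSet<>();
        params.add("default"); // base parameterisation always runs
        for (String path : touchedPaths)
            EXTRA.forEach((prefix, param) -> {
                if (path.startsWith(prefix))
                    params.add(param);
            });
        return params;
    }

    public static void main(String[] args) {
        System.out.println(parameterisations(List.of(
            "src/java/org/apache/cassandra/db/memtable/TrieMemtable.java")));
        // prints [default, trie-memtable]
    }
}
```

A real version would of course derive the mapping from build metadata rather than hard-coding prefixes, but the shape is the same: run the base suite always, and only pay for a variation when the change plausibly affects it.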
