> One thing where this “could” come into play is that we currently run with
> different configs at the CI level and we might be able to make this happen at
> the class or method level instead.
It'd be great to be able to declaratively indicate which configurations a test
needed to exercise and
> A brief perusal shows jqwik as integrated with JUnit 5 taking a fairly
> interesting annotation-based approach to property testing. Curious if you've
> looked into or used that at all David (Capwell)? (link for the lazy:
>
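For readers unfamiliar with the annotation-based style mentioned above, a minimal jqwik property might look like the following sketch (the class and property names are hypothetical; jqwik generates the `@ForAll` inputs and re-runs the property with fresh seeds each execution):

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

// Hypothetical sketch of jqwik's annotation-based property testing:
// the framework generates values for @ForAll parameters, runs the
// property many times, and shrinks any failing input it finds.
class ByteOrderProperties {
    @Property
    boolean reversingBytesTwiceIsIdentity(@ForAll int x) {
        return Integer.reverseBytes(Integer.reverseBytes(x)) == x;
    }
}
```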
> First of all - when you want to have a parameterized test case you do not
> have to make the whole test class parameterized - it is per test case. Also,
> each method can have different parameters.
This is a pretty compelling improvement to me having just had to use the
somewhat painful and
First of all - when you want to have a parameterized test case you do not
have to make the whole test class parameterized - it is per test case.
Also, each method can have different parameters.
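As a concrete sketch of the per-method parameterization described above (all class, method, and value names here are hypothetical, not existing Cassandra tests):

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.ValueSource;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical sketch: in JUnit 5 each @ParameterizedTest declares its own
// parameter source, so the class itself is not parameterized (unlike
// JUnit 4's @RunWith(Parameterized.class), which parameterizes every test).
class PerMethodParamsTest {
    static int square(int x) { return x * x; }

    @ParameterizedTest
    @ValueSource(strings = {"lz4", "snappy", "zstd"})
    void compressorNameIsNonEmpty(String name) {
        assertTrue(!name.isEmpty());
    }

    // A different method in the same class can use a completely
    // different parameter set and source.
    @ParameterizedTest
    @CsvSource({"1, 1", "2, 4", "3, 9"})
    void squares(int input, int expected) {
        assertEquals(expected, square(input));
    }
}
```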
For the extensions - we can have extensions which provide Cassandra
configuration, extensions which
Could you give (or link to) some examples of how this would actually benefit
our test suites?

On 12 Dec 2023, at 10:51, Jacek Lewandowski wrote:
I have two major pros for JUnit 5:
- much better support for parameterized tests
- global test hooks (automatically detectable extensions) +
multi-inheritance
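To illustrate the "automatically detectable extensions" point: JUnit 5 can auto-register extensions via the ServiceLoader mechanism when `junit.jupiter.extensions.autodetection.enabled=true` and the class is listed in `META-INF/services/org.junit.jupiter.api.extension.Extension`. A hedged sketch (the class name and system property are hypothetical):

```java
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical global hook: with autodetection enabled and this class
// listed in META-INF/services/org.junit.jupiter.api.extension.Extension,
// JUnit 5 invokes it before every test class, no per-class annotation needed.
class CassandraConfigExtension implements BeforeAllCallback {
    @Override
    public void beforeAll(ExtensionContext context) {
        // sketch: stands in for loading a Cassandra config once per class
        System.setProperty("cassandra.test.config.loaded", "true");
    }
}
```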
On Mon, 11 Dec 2023 at 13:38, Benedict wrote:
> Why do we want to move to JUnit 5?
>
> I’m generally opposed to churn unless well
Why do we want to move to JUnit 5? I’m generally opposed to churn unless well
justified, which it may be - just not immediately obvious to me.

On 11 Dec 2023, at 08:33, Jacek Lewandowski wrote:
Nobody has replied so far to the idea of moving to JUnit 5 - what are the
opinions?
On Sun, 10 Dec 2023 at 11:03, Benedict wrote:
> Alex’s suggestion was that we meta randomise, ie we randomise the config
> parameters to gain better rather than lesser coverage overall. This means
> we cover
Alex’s suggestion was that we meta randomise, ie we randomise the config
parameters to gain better rather than lesser coverage overall. This means we
cover these specific configs and more - just not necessarily on any single
commit.
I strongly endorse this approach over the status quo.
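The meta-randomisation idea amounts to deriving config values from a single logged seed, so repeated CI runs sample the config space while any failure stays reproducible. A minimal sketch (every name and parameter below is hypothetical, not Cassandra's actual config surface):

```java
import java.util.Random;

// Sketch of "meta randomising" config: instead of a fixed set of CI config
// matrices, derive each run's config from a seed that is logged up front,
// so a failing run can be replayed exactly. Hypothetical names throughout.
class RandomisedConfig {
    final boolean compression;
    final int concurrentReaders;
    final String memtableFormat;

    RandomisedConfig(long seed) {
        Random rnd = new Random(seed);      // same seed => same config
        compression = rnd.nextBoolean();
        concurrentReaders = 8 + rnd.nextInt(57);        // 8..64
        memtableFormat = rnd.nextBoolean() ? "skiplist" : "trie";
    }

    public static void main(String[] args) {
        long seed = args.length > 0 ? Long.parseLong(args[0]) : System.nanoTime();
        System.out.println("config seed: " + seed);     // log for reproducibility
        RandomisedConfig cfg = new RandomisedConfig(seed);
        System.out.println("compression=" + cfg.compression
                + " readers=" + cfg.concurrentReaders
                + " memtable=" + cfg.memtableFormat);
    }
}
```

The key design point is that the seed is the only input: re-running with the printed seed reproduces the exact configuration that failed.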
> On 8
> Unit tests that fail consistently, but only on one configuration, should not
> be removed/replaced until the replacement also catches the failure.
> along the way, people have decided a certain configuration deserves
> additional testing and it has been done this way in lieu of any other more
>
> I think everyone agrees here, but…. these variations are still catching
>> failures, and until we have an improvement or replacement we do rely on
>> them. I'm not in favour of removing them until we have proof /confidence
>> that any replacement is catching the same failures. Especially
>
> It would be great to set up a JUnitRunner using the simulator and find out
> though.
>
I like this idea - this is what I meant when asking about the current unit
tests - to me, a test is either a simulation or a fuzz. Due to the pretty
random execution order of unit tests, all of them can be
My logic here was that CQLTester tests would probably be the best candidate as
they are largely single-threaded and single-node. I'm sure there are background
processes that might slow things down when serialised into a single execution
thread, but my expectation would be that it will not be as
I think the biggest impediment to that is that most tests are probably not
sufficiently robust for simulation. If things happen in a surprising order
many tests fail, as they implicitly rely on the normal timing of things.

Another issue is that the simulator does potentially slow things down a
We have been extensively using the simulator for TCM, and I think we have made
simulator tests more approachable. I think many of the existing tests should be
run under the simulator instead of CQLTester, for example. This will both
strengthen the simulator and make things better in terms of
To be fair, the lack of a coherent framework doesn’t mean we can’t merge them
from a naming perspective. I don’t mind losing one of burn or fuzz, and merging
them.
Today simulator tests are kept under the simulator test tree but that primarily
exists for the simulator itself and testing it. It’s
Yes, the only system/real-time timeout is a progress one, wherein if nothing
happens for ten minutes we assume the simulation has locked up. Hitting this is
indicative of a bug, and the timeout is so long that no realistic system
variability could trigger it.
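The progress timeout described here boils down to something like the following sketch: the run fails only if a full window passes with no observed event, so total runtime stays unbounded while progress continues (hypothetical names; not the simulator's real watchdog):

```java
// Sketch of a progress-based (rather than wall-clock) timeout: the run is
// only declared locked up if no event arrives for the whole window.
// Hypothetical class; not the simulator's actual implementation.
class ProgressWatchdog {
    private final long windowMillis;
    private long lastProgressMillis;

    ProgressWatchdog(long windowMillis, long nowMillis) {
        this.windowMillis = windowMillis;
        this.lastProgressMillis = nowMillis;
    }

    /** Call whenever the simulation makes progress. */
    void onProgress(long nowMillis) { lastProgressMillis = nowMillis; }

    /** True if the window elapsed with no progress, i.e. a likely lockup. */
    boolean lockedUp(long nowMillis) {
        return nowMillis - lastProgressMillis > windowMillis;
    }
}
```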
> On 7 Dec 2023, at 14:56, Brandon
On Thu, Dec 7, 2023 at 8:50 AM Alex Petrov wrote:
> > I've noticed many "sleeps" in the tests - is it possible with simulation
> > tests to artificially move the clock forward by, say, 5 seconds instead of
> > sleeping just to test, for example whether TTL works?)
>
> Yes, simulator will skip
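The general technique behind skipping sleeps is an injectable clock that the test advances instantly instead of waiting in real time. A minimal sketch (hypothetical names, not the simulator's actual API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: code under test reads time through an injectable clock, so a
// test can "wait" five seconds by advancing the clock, with no real sleep.
// Hypothetical class; the simulator's real mechanism is more involved.
class VirtualClock {
    private final AtomicLong nowMillis = new AtomicLong();

    long now() { return nowMillis.get(); }

    void advance(long millis) { nowMillis.addAndGet(millis); }

    public static void main(String[] args) {
        VirtualClock clock = new VirtualClock();
        long writtenAt = clock.now();
        long ttlMillis = 5_000;
        clock.advance(5_001);               // "wait" 5s instantly
        boolean expired = clock.now() - writtenAt > ttlMillis;
        System.out.println("expired=" + expired);
    }
}
```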
> We should get rid of long-running unit tests altogether. They should run
> faster or be split.
I think we just need to evaluate on a case-by-case basis. Some tests are bad
and need to go. And we need other/better ones to replace them. I am
deliberately not making examples here both to avoid
>
> 1. Since long tests are just unit tests that take a long time to run,
>
Yes, they are just "resource intensive" tests, on a par with the "large" Python
dtests: they require more machine specs to run.
They are great candidates to improve so they don't require additional
resources, but many often
Thanks for the exhaustive response, Alex :)
Let me bring my point of view:
1. Since long tests are just unit tests that take a long time to run, it
makes sense to separate them for efficient parallelization in CI. Since we
are adding new tests, modifying the existing ones, etc., that should be
I will try to respond, but please keep in mind that all these terms are
somewhat contextual.
I think long and burn tests are somewhat synonymous. But most long/burn tests
that we have in-tree aren't actually that long. They are just long compared to
the unit tests. I personally would call the
I don’t know - I’m not sure what fuzz test means in this context. It’s a newer
concept that I didn’t introduce.

On 30 Nov 2023, at 20:06, Jacek Lewandowski wrote:
How do those burn tests then compare to the fuzz tests? (the new ones)
On Thu, 30 Nov 2023 at 20:22, Benedict wrote:
> By “could run indefinitely” I don’t mean by default they run forever.
> There will be parameters that change how much work is done for a given run,
> but just running
By “could run indefinitely” I don’t mean by default they run forever. There
will be parameters that change how much work is done for a given run, but just
running repeatedly (each time with a different generated seed) is the expected
usage. Until you run out of compute or patience.

I agree they
> that may be long-running and that could be run indefinitely
Perfect. That was the distinction I wasn't aware of. It also means having the
burn target as part of regular CI runs is probably a mistake, yes? I.e. if
someone adds a burn test that runs indefinitely, are there any guardrails or
A burn test is a randomised test targeting broad coverage of a single system,
subsystem or utility, that may be long-running and that could be run
indefinitely, each run providing incrementally more assurance of quality of
the system.

A long test is a unit test that sometimes takes a long time to
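The burn-test shape described here reduces to a seeded loop that runs for as many iterations as compute or patience allows, logging each seed so a failure replays deterministically. A hedged sketch with hypothetical names (the checked property is a stand-in):

```java
import java.util.Random;

// Sketch of a burn test: repeat a randomised check indefinitely (or for a
// bounded iteration count), printing each seed so any failure can be
// replayed with the same inputs. All names are hypothetical.
class BurnRunner {
    static void runOnce(long seed) {
        Random rnd = new Random(seed);
        int x = rnd.nextInt();
        // stand-in property under burn: bit reversal is an involution
        if (Integer.reverse(Integer.reverse(x)) != x)
            throw new AssertionError("failed for seed " + seed);
    }

    public static void main(String[] args) {
        int iterations = args.length > 0 ? Integer.parseInt(args[0]) : 100;
        Random seeds = new Random();
        for (int i = 0; i < iterations; i++) {
            long seed = seeds.nextLong();
            System.out.println("burn iteration " + i + " seed " + seed);
            runOnce(seed);      // each iteration uses a fresh, logged seed
        }
    }
}
```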
Strongly agree. I started working on a declarative refactor out of our CI
configuration so circle, ASFCI, and other systems could inherit from it (for
instance, see pre-commit pipeline declaration here