> First of all - when you want to have a parameterized test case you do not 
> have to make the whole test class parameterized - it is per test case. Also, 
> each method can have different parameters.
This is a pretty compelling improvement to me, having just had to use the 
somewhat painful and blunt instrument of our current framework's 
parameterization; it's pretty clunky and broad.
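
For anyone who hasn't played with it yet, roughly what that looks like in 
JUnit 5 (Jupiter) - a sketch with invented class/test names, not something 
from our tree:

import java.util.stream.Stream;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import org.junit.jupiter.params.provider.ValueSource;

// Hypothetical test class - only individual methods are parameterized,
// and each one declares its own parameter sources.
class MemtableFlushTest
{
    @ParameterizedTest
    @ValueSource(ints = {1, 16, 1024})
    void flushAtSizeMb(int sizeMb)
    {
        Assertions.assertTrue(sizeMb > 0);
    }

    @ParameterizedTest
    @MethodSource("configs")
    void readAfterWrite(String memtableClass, boolean compression)
    {
        Assertions.assertNotNull(memtableClass);
    }

    static Stream<Arguments> configs()
    {
        return Stream.of(Arguments.of("SkipListMemtable", true),
                         Arguments.of("TrieMemtable", false));
    }
}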

It also looks like they moved to a "test engine abstracted away from test 
identification" approach in their architecture for 5, w/the "vintage" engine 
providing native, unchanged backwards compatibility w/JUnit 4. Assuming they 
didn't bork up their architecture, that *should* lower the risk of the 
framework change leading to disruption or failure (famous last words...).
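
Concretely (a sketch only - I haven't tried this against our build): with 
org.junit.vintage:junit-vintage-engine on the test runtime classpath next to 
org.junit.jupiter:junit-jupiter-engine, an existing JUnit 4 test like the 
made-up one below should keep running unchanged on the JUnit 5 platform:

// Unmodified JUnit 4 test - discovered and run by the vintage engine,
// no source changes required (class name invented for illustration).
import org.junit.Assert;
import org.junit.Test;

public class LegacyCompactionTest
{
    @Test
    public void compactsToSingleSSTable()
    {
        Assert.assertTrue(true); // placeholder body
    }
}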

A brief perusal shows jqwik, integrated with JUnit 5, taking a fairly 
interesting annotation-based approach to property testing. Curious if you've 
looked into or used that at all, David (Capwell)? (link for the lazy: 
https://jqwik.net/docs/current/user-guide.html#detailed-table-of-contents).
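
For context, a jqwik property looks roughly like this (toy property, 
invented names):

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ArithmeticProperties
{
    // jqwik generates many random int pairs and shrinks any failing case;
    // returning true means the property held for that sample.
    @Property
    boolean additionIsCommutative(@ForAll int a, @ForAll int b)
    {
        return a + b == b + a;
    }
}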

On Tue, Dec 12, 2023, at 11:39 AM, Jacek Lewandowski wrote:
> First of all - when you want to have a parameterized test case you do not 
> have to make the whole test class parameterized - it is per test case. Also, 
> each method can have different parameters.
> 
> For the extensions - we can have extensions which provide Cassandra 
> configuration, extensions which provide a running cluster, and others. We 
> could, for example, apply some extensions to all test classes externally 
> without touching those classes - something like logging the beginning and 
> end of each test case. 
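
(To make that last bit concrete - a rough sketch of a globally 
auto-registered extension; the class name is invented and none of this is 
wired into our build:)

import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// With junit.jupiter.extensions.autodetection.enabled=true and an entry in
// META-INF/services/org.junit.jupiter.api.extension.Extension, this applies
// to every test case without touching any test class.
public class TestCaseLoggingExtension
        implements BeforeTestExecutionCallback, AfterTestExecutionCallback
{
    @Override
    public void beforeTestExecution(ExtensionContext context)
    {
        System.out.println("BEGIN " + context.getDisplayName());
    }

    @Override
    public void afterTestExecution(ExtensionContext context)
    {
        System.out.println("END   " + context.getDisplayName());
    }
}
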
> 
> 
> 
> wt., 12 gru 2023 o 12:07 Benedict <bened...@apache.org> napisał(a):
>> 
>> Could you give (or link to) some examples of how this would actually benefit 
>> our test suites?
>> 
>> 
>>> On 12 Dec 2023, at 10:51, Jacek Lewandowski <lewandowski.ja...@gmail.com> 
>>> wrote:
>>> 
>>> I have two major pros for JUnit 5:
>>> - much better support for parameterized tests
>>> - global test hooks (automatically detectable extensions) + 
>>> multi-inheritance
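
(Re: "multi-inheritance" above - presumably Jupiter's test interfaces with 
default methods, which let a test class mix hooks in from more than one 
interface. A rough sketch with invented names:)

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

interface WithCluster
{
    @BeforeEach
    default void startCluster() { /* bring up a cluster fixture */ }
}

interface WithTracing
{
    @BeforeEach
    default void enableTracing() { /* switch tracing on for the test */ }
}

// Mixes in both sets of hooks - something single-base-class JUnit 4
// inheritance cannot express.
class ReadRepairTest implements WithCluster, WithTracing
{
    @Test
    void repairsDivergentReplicas() { /* ... */ }
}
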
>>> 
>>> 
>>> 
>>> 
>>> pon., 11 gru 2023 o 13:38 Benedict <bened...@apache.org> napisał(a):
>>>> 
>>>> Why do we want to move to JUnit 5? 
>>>> 
>>>> I’m generally opposed to churn unless well justified, which it may be - 
>>>> just not immediately obvious to me.
>>>> 
>>>> 
>>>>> On 11 Dec 2023, at 08:33, Jacek Lewandowski <lewandowski.ja...@gmail.com> 
>>>>> wrote:
>>>>> 
>>>>> Nobody has referred so far to the idea of moving to JUnit 5 - what are 
>>>>> the opinions?
>>>>> 
>>>>> 
>>>>> 
>>>>> niedz., 10 gru 2023 o 11:03 Benedict <bened...@apache.org> napisał(a):
>>>>>> 
>>>>>> Alex’s suggestion was that we meta-randomise, i.e. we randomise the 
>>>>>> config parameters to gain better rather than lesser coverage overall. 
>>>>>> This means we cover these specific configs and more - just not 
>>>>>> necessarily on any single commit.
>>>>>> 
>>>>>> I strongly endorse this approach over the status quo.
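
(A purely illustrative sketch of that idea - nothing like this exists 
in-tree and all the names are invented; the point is just a per-run seed 
that gets logged so a failing combination can be replayed:)

import java.util.Random;

public class RandomisedTestConfig
{
    public static void main(String[] args)
    {
        // Reuse a previously logged seed to reproduce a failure, else pick one.
        long seed = args.length > 0 ? Long.parseLong(args[0]) : System.nanoTime();
        Random rng = new Random(seed);

        String memtable = rng.nextBoolean() ? "trie" : "skiplist";
        boolean compression = rng.nextBoolean();
        int numTokens = rng.nextBoolean() ? 1 : 16;

        System.out.printf("config seed=%d memtable=%s compression=%b num_tokens=%d%n",
                          seed, memtable, compression, numTokens);
        // ...apply these to the test config before launching the suite...
    }
}
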
>>>>>> 
>>>>>> 
>>>>>>> On 8 Dec 2023, at 13:26, Mick Semb Wever <m...@apache.org> wrote:
>>>>>>> 
>>>>>>>  
>>>>>>>  
>>>>>>>  
>>>>>>>> 
>>>>>>>>> I think everyone agrees here, but… these variations are still 
>>>>>>>>> catching failures, and until we have an improvement or replacement we 
>>>>>>>>> do rely on them. I'm not in favour of removing them until we have 
>>>>>>>>> proof/confidence that any replacement is catching the same failures. 
>>>>>>>>> Especially oa, tries, vnodes. (Note: tries and offheap are being 
>>>>>>>>> replaced with "latest", which will be a valuable simplification.)  
>>>>>>>> 
>>>>>>>> What kind of proof do you expect? I cannot imagine how we could prove 
>>>>>>>> that, because the ability to detect failures results from the 
>>>>>>>> randomness of those tests. That's why, when such a test fails, you 
>>>>>>>> usually cannot reproduce it easily.
>>>>>>> 
>>>>>>> 
>>>>>>> Unit tests that fail consistently but only on one configuration should 
>>>>>>> not be removed/replaced until the replacement also catches the failure.
>>>>>>>  
>>>>>>>> We could extrapolate that to - why do we only have those 
>>>>>>>> configurations? Why don't we test trie / oa + compression, or CDC, or 
>>>>>>>> the system memtable? 
>>>>>>> 
>>>>>>> 
>>>>>>> Because, along the way, people have decided a certain configuration 
>>>>>>> deserves additional testing and it has been done this way in lieu of 
>>>>>>> any other more efficient approach.
>>>>>>> 
>>>>>>> 
>>>>>>> 
