> One thing where this “could” come into play is that we currently run with 
> different configs at the CI level, and we might be able to make this happen 
> at the class or method level instead.
It'd be great to be able to declaratively indicate which configurations a test 
needs to exercise, and then have a single CI run that includes them as 
appropriate. 
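
Something like the hypothetical sketch below could express that - all of the 
names here (@ServerConfigs, ServerConfig, ServerConfigProvider) are made up, 
nothing like this exists in-tree today:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.stream.Stream;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.ArgumentsProvider;
import org.junit.jupiter.params.provider.ArgumentsSource;

// the configurations a test can declare it needs to exercise
enum ServerConfig { DEFAULT, TRIE, OFFHEAP, VNODES }

@Retention(RetentionPolicy.RUNTIME)
@interface ServerConfigs { ServerConfig[] value(); }

// expands the class-level annotation into one test invocation per declared config
class ServerConfigProvider implements ArgumentsProvider
{
    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context)
    {
        ServerConfigs declared = context.getRequiredTestClass().getAnnotation(ServerConfigs.class);
        ServerConfig[] configs = declared != null ? declared.value() : ServerConfig.values();
        return Arrays.stream(configs).map(Arguments::of);
    }
}

@ServerConfigs({ ServerConfig.DEFAULT, ServerConfig.TRIE }) // excludes unsupported configs
class InsertTest
{
    @ParameterizedTest
    @ArgumentsSource(ServerConfigProvider.class)
    void insert(ServerConfig config)
    {
        // a real test would apply `config` to the cluster before exercising inserts
    }
}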

On Mon, Dec 18, 2023, at 7:22 PM, David Capwell wrote:
>> A brief perusal shows jqwik, as integrated with JUnit 5, taking a fairly 
>> interesting annotation-based approach to property testing. Curious if you've 
>> looked into or used that at all, David (Capwell)? (link for the lazy: 
>> https://jqwik.net/docs/current/user-guide.html#detailed-table-of-contents).
> 
> I have not, no.  Looking at your link, it moves from lambdas to annotations, 
> and tries to define an API for stateful testing… I am neutral on that as it's 
> mostly style…. One thing to call out is that the project documents that it 
> tries to “shrink”… we ended up disabling this in QuickTheories, as shrinking 
> doesn't work well for many of our tests (too high a resource demand, and 
> unable to actually shrink once you move past trivial generators).  Looking at 
> their docs and their code, it's hard for me to see how we would actually 
> create C* generators… there is so much class-generation magic that I really 
> don't see how to create AbstractType or TableMetadata… the only example they 
> gave was not random data but hand-crafted data… 
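> 
> For reference, the annotation style looks roughly like the minimal sketch 
> below (a hypothetical test written against the @Property/@ForAll/@Provide 
> API their guide documents; the toy string arbitrary just stands in for the 
> C* generators we would actually need):
> 
> import net.jqwik.api.Arbitraries;
> import net.jqwik.api.Arbitrary;
> import net.jqwik.api.ForAll;
> import net.jqwik.api.Property;
> import net.jqwik.api.Provide;
> 
> class ColumnNameProperties
> {
>     // a boolean-returning property passes when it returns true for all samples
>     @Property
>     boolean quotingGrowsNameByAtLeastTwo(@ForAll("columnNames") String name)
>     {
>         String quoted = '"' + name.replace("\"", "\"\"") + '"';
>         return quoted.length() >= name.length() + 2;
>     }
> 
>     // looked up by name from the @ForAll annotation above
>     @Provide
>     Arbitrary<String> columnNames()
>     {
>         return Arbitraries.strings().alpha().ofMinLength(1).ofMaxLength(48);
>     }
> }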
> 
>> moving to JUnit 5
> 
> I am a fan of this.  If we add dependencies and don't keep them up to date, 
> it becomes painful over time (missing features, lack of support, etc.).  
> 
>> First of all - when you want to have a parameterized test case, you do not 
>> have to make the whole test class parameterized - it is per test case. Also, 
>> each method can have different parameters.
> 
> I strongly prefer this, but it has never been a blocker to me writing param 
> tests…. One thing where this “could” come into play is that we currently run 
> with different configs at the CI level, and we might be able to make this 
> happen at the class or method level instead.
> 
> @ServerConfigs(all) // can exclude unsupported configs
> public class InsertTest
> 
> It bothers me deeply that in CI we run tests under configs they don't even 
> touch, causing us to waste resources… Can we solve this with junit4's param 
> logic… no clue… 
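> 
> For contrast, here is a minimal JUnit 5 sketch of the per-method style 
> (made-up test names and values, purely illustrative) - each method carries 
> its own parameters, with no class-level runner:
> 
> import org.junit.jupiter.params.ParameterizedTest;
> import org.junit.jupiter.params.provider.CsvSource;
> import org.junit.jupiter.params.provider.ValueSource;
> 
> import static org.junit.jupiter.api.Assertions.assertEquals;
> import static org.junit.jupiter.api.Assertions.assertTrue;
> 
> class PerMethodParamsTest
> {
>     // one parameter source, scoped to this method only
>     @ParameterizedTest
>     @ValueSource(ints = { 1, 128, 65535 })
>     void flushThresholdAccepted(int kib)
>     {
>         assertTrue(kib > 0);
>     }
> 
>     // a different source, and different parameter types, on the same class
>     @ParameterizedTest
>     @CsvSource({ "lz4, true", "none, false" })
>     void compressionFlag(String compressor, boolean enabled)
>     {
>         assertEquals(enabled, !"none".equals(compressor));
>     }
> }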
> 
>> On Dec 15, 2023, at 6:52 PM, Josh McKenzie <jmcken...@apache.org> wrote:
>> 
>>> First of all - when you want to have a parameterized test case you do not 
>>> have to make the whole test class parameterized - it is per test case. 
>>> Also, each method can have different parameters.
>> This is a pretty compelling improvement to me, having just had to use the 
>> somewhat painful and blunt instrument of our current framework's 
>> parameterization; it's pretty clunky and broad.
>> 
>> It also looks like they moved to a “test engine abstracted away from test 
>> identification” approach to their architecture in 5, w/the “vintage” model 
>> providing native, unchanged backwards-compatibility w/JUnit 4. Assuming they 
>> didn't bork up their architecture, that *should* lower the risk of the 
>> framework change leading to disruption or failure (famous last words...).
>> 
>> A brief perusal shows jqwik, as integrated with JUnit 5, taking a fairly 
>> interesting annotation-based approach to property testing. Curious if you've 
>> looked into or used that at all, David (Capwell)? (link for the lazy: 
>> https://jqwik.net/docs/current/user-guide.html#detailed-table-of-contents).
>> 
>> On Tue, Dec 12, 2023, at 11:39 AM, Jacek Lewandowski wrote:
>>> First of all - when you want to have a parameterized test case, you do not 
>>> have to make the whole test class parameterized - it is per test case. 
>>> Also, each method can have different parameters.
>>> 
>>> For the extensions - we can have extensions which provide Cassandra 
>>> configuration, extensions which provide a running cluster, and others. We 
>>> could, for example, apply some extensions to all test classes externally, 
>>> without touching those classes - something like logging the beginning and 
>>> end of each test case, as sketched below. 
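>>> 
>>> A minimal sketch of such an extension (this uses the real JUnit 5 callback 
>>> API; the class name is made up). With 
>>> junit.jupiter.extensions.autodetection.enabled=true and an entry in 
>>> META-INF/services/org.junit.jupiter.api.extension.Extension, it applies to 
>>> every test class without modifying any of them:
>>> 
>>> import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
>>> import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
>>> import org.junit.jupiter.api.extension.ExtensionContext;
>>> 
>>> // Logs the begin/end of every test case; registered globally via the
>>> // ServiceLoader mechanism rather than per-class @ExtendWith.
>>> public class TestCaseLoggingExtension
>>>         implements BeforeTestExecutionCallback, AfterTestExecutionCallback
>>> {
>>>     @Override
>>>     public void beforeTestExecution(ExtensionContext context)
>>>     {
>>>         System.out.println("BEGIN " + context.getDisplayName());
>>>     }
>>> 
>>>     @Override
>>>     public void afterTestExecution(ExtensionContext context)
>>>     {
>>>         System.out.println("END   " + context.getDisplayName());
>>>     }
>>> }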
>>> 
>>> 
>>> 
>>> On Tue, 12 Dec 2023 at 12:07, Benedict <bened...@apache.org> wrote:
>>>> 
>>>> Could you give (or link to) some examples of how this would actually 
>>>> benefit our test suites?
>>>> 
>>>> 
>>>>> On 12 Dec 2023, at 10:51, Jacek Lewandowski <lewandowski.ja...@gmail.com> 
>>>>> wrote:
>>>>> 
>>>>> I have two major pros for JUnit 5:
>>>>> - much better support for parameterized tests
>>>>> - global test hooks (automatically detectable extensions) + 
>>>>> multi-inheritance
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On Mon, 11 Dec 2023 at 13:38, Benedict <bened...@apache.org> wrote:
>>>>>> 
>>>>>> Why do we want to move to JUnit 5? 
>>>>>> 
>>>>>> I’m generally opposed to churn unless well justified, which it may be - 
>>>>>> just not immediately obvious to me.
>>>>>> 
>>>>>> 
>>>>>>> On 11 Dec 2023, at 08:33, Jacek Lewandowski 
>>>>>>> <lewandowski.ja...@gmail.com> wrote:
>>>>>>> 
>>>>>>> Nobody has referred so far to the idea of moving to JUnit 5 - what are 
>>>>>>> the opinions?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Sun, 10 Dec 2023 at 11:03, Benedict <bened...@apache.org> wrote:
>>>>>>>> 
>>>>>>>> Alex’s suggestion was that we meta-randomise, i.e. we randomise the 
>>>>>>>> config parameters to gain better rather than lesser coverage overall. 
>>>>>>>> This means we cover these specific configs and more - just not 
>>>>>>>> necessarily on any single commit.
>>>>>>>> 
>>>>>>>> I strongly endorse this approach over the status quo.
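>>>>>>>> 
>>>>>>>> For concreteness, a rough sketch of what meta-randomisation could look 
>>>>>>>> like (hypothetical code, no such utility exists today): draw the 
>>>>>>>> config parameters from a seeded RNG and log the seed, so any failing 
>>>>>>>> combination can be reproduced exactly.
>>>>>>>> 
>>>>>>>> import java.util.Random;
>>>>>>>> 
>>>>>>>> public class RandomisedConfig
>>>>>>>> {
>>>>>>>>     public static void main(String[] args)
>>>>>>>>     {
>>>>>>>>         // pinning the seed via -Dtest.config.seed=<seed> reproduces a failing run
>>>>>>>>         long seed = Long.getLong("test.config.seed", System.nanoTime());
>>>>>>>>         Random rnd = new Random(seed);
>>>>>>>>         System.out.println("config seed = " + seed);
>>>>>>>> 
>>>>>>>>         // illustrative parameters only; a real harness would cover the full config
>>>>>>>>         boolean vnodes = rnd.nextBoolean();
>>>>>>>>         String memtable = rnd.nextBoolean() ? "trie" : "skiplist";
>>>>>>>>         String compression = new String[]{ "lz4", "zstd", "none" }[rnd.nextInt(3)];
>>>>>>>>         System.out.printf("vnodes=%s memtable=%s compression=%s%n", vnodes, memtable, compression);
>>>>>>>>     }
>>>>>>>> }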
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 8 Dec 2023, at 13:26, Mick Semb Wever <m...@apache.org> wrote:
>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> I think everyone agrees here, but…. these variations are still 
>>>>>>>>>>> catching failures, and until we have an improvement or replacement 
>>>>>>>>>>> we do rely on them.  I'm not in favour of removing them until we 
>>>>>>>>>>> have proof/confidence that any replacement is catching the same 
>>>>>>>>>>> failures.  Especially oa, tries, vnodes.  (Note: tries and offheap 
>>>>>>>>>>> are being replaced with “latest”, which will be a valuable 
>>>>>>>>>>> simplification.)  
>>>>>>>>>> 
>>>>>>>>>> What kind of proof do you expect? I cannot imagine how we could 
>>>>>>>>>> prove that, because the ability to detect failures results from the 
>>>>>>>>>> randomness of those tests. That's why, when such a test fails, you 
>>>>>>>>>> usually cannot reproduce it easily.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Unit tests that fail consistently, but only on one configuration, 
>>>>>>>>> should not be removed/replaced until the replacement also catches the 
>>>>>>>>> failure.
>>>>>>>>>  
>>>>>>>>>> We could extrapolate that to: why do we only have those 
>>>>>>>>>> configurations? Why don't we test trie / oa + compression, or CDC, 
>>>>>>>>>> or system memtable? 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Because, along the way, people have decided a certain configuration 
>>>>>>>>> deserves additional testing and it has been done this way in lieu of 
>>>>>>>>> any other more efficient approach.
