I agree with Jacek; I don't quite understand why we run the pipeline for
both j17 and j11 every time. I think this should be opt-in. The majority
of the time we are just refactoring and writing ordinary Cassandra code,
where testing on both JVMs is pointless and we _know_ it will be fine on
11 and 17 alike, because we are not doing anything JVM-specific. If we
find subsystems where testing on both JVMs is crucial, we can opt in for
those, but I cannot remember the last time that running both j17 and j11
uncovered a bug. It seems more like a hassle.

We could then run the whole pipeline against a different configuration
instead, in roughly the same time we spend today.
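
To make that concrete, here is a rough, purely hypothetical sketch (this
is not our real test infrastructure, and the config names are made up) of
how a subset of tests could be fanned out over a configuration matrix
instead of over JVMs:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertTrue;

// Hypothetical sketch only: runs the same test body once per configuration
// variant, the way CI could fan out a config matrix instead of a JVM matrix.
@RunWith(Parameterized.class)
public class ConfigMatrixExampleTest
{
    @Parameters(name = "config={0}")
    public static Collection<Object[]> configs()
    {
        // Made-up variant names; a real matrix would be driven by CI.
        return Arrays.asList(new Object[][]{ { "default" }, { "compression" }, { "cdc" } });
    }

    private final String config;

    public ConfigMatrixExampleTest(String config)
    {
        this.config = config;
    }

    @Test
    public void behavesTheSameUnderEveryConfig()
    {
        // A real test would exercise the subsystem with the selected
        // configuration applied; here we only check the variant is known.
        assertTrue(Arrays.asList("default", "compression", "cdc").contains(config));
    }
}

The point is only that the extra CI dimension would come from
configurations we actually care about rather than from a second JVM run.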

On Wed, Feb 14, 2024 at 9:32 PM Jacek Lewandowski <
lewandowski.ja...@gmail.com> wrote:

> On Wed, Feb 14, 2024 at 5:30 PM Josh McKenzie <jmcken...@apache.org> wrote:
>
>> When we have failing tests, people do not spend the time to figure out if
>> their logic caused a regression and merge, making things more unstable… so
>> when we merge failing tests, that leads to people merging even more failing
>> tests...
>>
>> What's the counter-position to this, Jacek / Berenguer?
>>
>
> For how long are we going to deceive ourselves? Are we shipping those
> features or not? Perhaps it is also a good opportunity to distinguish the
> subsets of tests that make sense to run with a configuration matrix.
>
> If we don't add those tests to the pre-commit pipeline, "people do not
> spend the time to figure out if their logic caused a regression and merge,
> making things more unstable…"
> I think it is much more valuable to test those various configurations than
> to test against j11 and j17 separately. I see very little value in the
> latter.
>
