When I worked on Riak we had a much more complex matrix due to supporting even more backwards compatibility. It’s not infeasible. You don’t have to run every suite on every commit since, as folks have pointed out, for the most part the JVM isn’t the culprit. We just need to run the suites often enough to catch the cases where it is, for some definition of “enough”.
We used to run them closer to releases, but that would often lead to a scramble to fix things. Periodically would be my preference, plus on demand when engineers think they may have changed something where the JVM could have an impact. However, one challenge is that we may be more limited on test resources, as I believe Mck pointed out on another thread.

Jordan

On Tue, May 20, 2025 at 19:52 Josh McKenzie <jmcken...@apache.org> wrote:

> The problem with (2) being only "overlapping JDK version support on
> consecutive releases" instead of an overlapping JDK over all `N-2` releases
> is that we say we support upgrade paths that we never test (w/
> jvm-dtest-upgrade). Here, I would rather add a third LTS JDK to a release
> to maintain that `N-2` testing than push something untested onto users.
>
> Good point. With 4.0 in the mix right now (i.e. 3 supported branches) we
> have a pretty nasty matrix we'd have to test:
>
> - 4.0 (N-3): 8, 11
> - 4.1 (N-2): 8, 11
> - 5.0 (N-1): 11, 17
> - 6.0: 11, 17, 21
>
> Upgrade tests:
>
> - JDK8: 4.0 -> 4.1
> - JDK11: 4.0 -> 4.1
> - JDK11: 4.0 -> 5.0
> - JDK11: 4.1 -> 6.0
> - JDK11: 5.0 -> 6.0
> - JDK17: 5.0 -> 6.0
>
> If 4.0 weren't in the mix, we'd still be looking at 4 jvm-upgrade tests
> to run w/ the "shared JDK for N-2" paradigm (as opposed to "break JVM
> and C* version together"):
>
> - N-2 to N-1 on shared JDK-2
> - N-2 to N on shared JDK-2
> - N-1 to N on shared JDK-2
> - N-1 to N on shared JDK-1 (if we bump to a new JDK we support here)
>
> So yeah. I think we'll need to figure out how much coverage is reasonable
> to call something "tested". I don't think it's sustainable for us to have,
> at any given time, 3 branches we test across 3 JDKs each with all our
> in-jvm test suites, is it?
>
> On Tue, May 20, 2025, at 2:24 PM, Ekaterina Dimitrova wrote:
>
> Another thing to consider is the usage of JDK internals. The JDK
> developers do not promise backward compatibility for internals.
> We still have things like jamm that need updates, and not only jamm.
> Sometimes they can fail us silently despite fully green CI.
>
> Performance is a good point - we don't even have regular performance
> testing.
>
> With that said - I see pros and cons in both suggestions here. Just wanted
> to bring visibility to yet another wrinkle (the JDK internals usage), as I
> am sure there are a lot of people who probably haven't even heard of jamm.
>
> On Tue, 20 May 2025 at 14:09, Benedict <bened...@apache.org> wrote:
>
> There are performance differences between JVMs. I agree that bug testing
> of JVM versions for clients is not very important, but isolating JVM
> characteristic changes from database characteristic changes is important,
> for me at least.
>
> On 20 May 2025, at 17:47, Jon Haddad <j...@rustyrazorblade.com> wrote:
>
> If you're upgrading an environment without doing any additional testing -
> sure, it can be helpful to isolate the issue.
>
> However, outside of this scenario, where you actually test your upgrade
> process and vet the functionality, I don't see it as a big gain - certainly
> not enough of one to hold the project back from moving forward. If there
> are bugs in the JVM itself, they should have been found already. We're
> almost 2 years behind on the LTS release; we have plenty of testing, and
> this stuff should be caught long before it's time to upgrade.
>
> The way I see it, we should do one of the following:
>
> * Support multiple versions with only limited overlap. For example, we
> support 11, 17, 21, 24 in 6.0, then 21, 24, 27 in 7.0.
> * Or do the two-version thing and not bother with overlap.
>
> The main reason I can think of to continue to support older versions is
> dependency compatibility, e.g. if we use C*-all within the bulk reader
> and that requires supporting older JVMs for Spark itself.
> The other reasons (A/B JVM testing & debugging upgrade bugs) are pretty
> weak in comparison to the gains to be had from moving forward. For
> example, dropping older versions means we can:
>
> * Use generational ZGC (JDK 21+) as standard. No more long pauses.
> * Move all allocation to per-thread arenas (leveraging memory layouts &
> structured memory) to avoid allocation on the heap (21+).
> * Use virtual threads (I think 24+).
>
> With our current policy, this is our timetable:
>
> * 6.0: 17 + 21. 2025
> * 7.0: 21 + 24. 2026
> * 8.0: 24 + 27. 2027
>
> We're a year away from having a release that can use any of the "newer"
> JVM features and 2 years away from having the ability to do some
> intelligent memory management. That's a long time for an entire community
> to wait because of the users who want to do A/B tests against JVM versions,
> or want the ability to debug potential Java release issues in production
> without a staging environment.
>
> Jon
>
> On Tue, May 20, 2025 at 9:07 AM Brandon Williams <dri...@gmail.com> wrote:
>
> On Tue, May 20, 2025 at 10:59 AM Jon Haddad <j...@rustyrazorblade.com>
> wrote:
>
> > There is also that recommendation that I keep on hearing - don't do C*
> > major upgrade and JDK upgrade simultaneously. I believe that was one of
> > the reasons for overlap too
> >
> > There's no practical reason for this today. Maybe in the Java 6 or 8
> > days, sure. But now, it's a useless requirement.
>
> If I'm going to encounter a strange bug after upgrading, I'd like the
> surface area to be limited to one of C* or the JVM if possible.
>
> Kind Regards,
> Brandon
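
[Editor's note: the upgrade-test matrix Josh enumerates above follows mechanically from the "shared JDK" rule. Here is a minimal Python sketch of that derivation — the release names and supported-JDK lists come from the thread, but the function name and structure are illustrative, not real project tooling. The supported upgrade paths are mirrored exactly as listed in the thread.]

```python
# Sketch: derive the jvm-dtest-upgrade matrix from the "shared JDK" rule
# discussed in the thread. SUPPORTED_JDKS and UPGRADE_PATHS are copied
# from Josh's example; upgrade_test_matrix() is an illustrative name.

SUPPORTED_JDKS = {
    "4.0": [8, 11],
    "4.1": [8, 11],
    "5.0": [11, 17],
    "6.0": [11, 17, 21],
}

# Upgrade paths claimed as supported, as enumerated in the thread.
UPGRADE_PATHS = [
    ("4.0", "4.1"),
    ("4.0", "5.0"),
    ("4.1", "6.0"),
    ("5.0", "6.0"),
]

def upgrade_test_matrix(jdks, paths):
    """One jvm-dtest-upgrade run per (shared JDK, from, to) combination."""
    matrix = []
    for src, dst in paths:
        # A path is only testable on a JDK both releases support.
        shared = sorted(set(jdks[src]) & set(jdks[dst]))
        for jdk in shared:
            matrix.append((jdk, src, dst))
    return sorted(matrix)

if __name__ == "__main__":
    for jdk, src, dst in upgrade_test_matrix(SUPPORTED_JDKS, UPGRADE_PATHS):
        print(f"JDK{jdk}: {src} -> {dst}")
```

Run as-is, this prints the same six entries Josh lists (JDK8: 4.0 -> 4.1 through JDK17: 5.0 -> 6.0), which makes it easy to see how the matrix shrinks or grows as branches and JDKs are added or dropped.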