Hi, I'm trying to understand exactly what the issue is. I've had occasion to run a few small crawls using the default embedded Hadoop, on both JVM 17 and JVM 21. Actually, at my former employer, Nutch was running on Java 21, last I saw.
In a full production environment, if Nutch is using an external Hadoop, and if Hadoop has difficulties with Java 17+ at runtime, then Nutch and Hadoop can each run on their own JVM. If the issue is only at compile time, we don't have to wait for Hadoop itself to compile with Java 17 (see the build sketch below the quoted mail for what option 2 could look like). Or am I completely off-track?

Isabelle Giguère

On Tue, Feb 3, 2026 at 16:52, Sebastian Nagel <[email protected]> wrote:

> Hi everybody,
>
> the current Nutch development is ready for Java 17
> with NUTCH-2971 fixed - thanks Isabelle!
>
> As of now, Nutch does not require Java 17 at compile or
> run time. Java 11 is still sufficient.
>
> This is good because Hadoop still does not guarantee
> full support of Java 17 [1].
>
> However, staying compatible with Java 11 becomes a burden
> because more and more dependencies require an upgrade to Java 17.
> We already have two PRs open which are great improvements
> (thanks to Lewis!) but would require Java 17:
> - index-geoip NUTCH-3064 / PR #825 [2]
> - JUnit 6 NUTCH-3145 / PR #883 [3]
>
> We now have the following options for the next release (1.22):
>
> 1. stay on Java 11
>
> 2. require Java 17 at compile time, but compile using "-target 11"
>    to stay compatible with Java 11 at runtime
>
> 3. drop support for Java 11 and switch to Java 17
>
>
> I'm leaning toward option 1 and will try to release Nutch 1.22
> within the next few weeks. After the release, we go to option 3.
>
>
> Please share your thoughts and opinions!
>
>
> ~Sebastian
>
>
> [1] https://issues.apache.org/jira/browse/HADOOP-17177
> [2] https://github.com/apache/nutch/pull/825
> [3] https://github.com/apache/nutch/pull/883
>
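A minimal sketch of what option 2 could look like in the Ant build, assuming a reasonably recent Ant; the property and reference names below are placeholders, not necessarily the ones used in Nutch's build.xml:

    <!-- sketch only: compile on JDK 17+ but check against the Java 11 API;
         the "release" attribute of <javac> replaces separate -source/-target flags -->
    <javac srcdir="${src.dir}"
           destdir="${build.classes}"
           release="11"
           debug="true"
           includeantruntime="false">
      <classpath refid="classpath"/>
    </javac>

On the command line this corresponds to "javac --release 11", which fails the build if code accidentally uses an API that doesn't exist in Java 11, something a bare "-target 11" would not catch.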

