I would classify the must-have items into:

1. CI infrastructure.
We must refactor the CI to run regular checks against JDK 17, covering both
compilation and testing.
In my local experiments I can compile the Hadoop code successfully after tuning
some Maven plugin args, but getting the CI to run the tests is more challenging
for me. The first challenge is HADOOP-19223: currently the CI runs serially and
fails fast, so we do not even get a chance to run the Java tests if the PR does
not touch test code. The second one is parallelism: in a simple test in
https://github.com/apache/hadoop/pull/6914, one round of testing took over 24
hours to complete, which is far too long ... in comparison, Spark takes about
1~2 hours to complete a round of tests. BTW, the experiment also shows that JDK
17 requires more memory than JDK 8; I had to raise the memory limit from the
existing 22g to 32g to avoid the OOM kill from Yetus (see the sketch below).

I sincerely hope someone who is familiar with the Hadoop CI infrastructure can
solve the CI issues first ...
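
To make the above concrete, here is a rough sketch of the kind of tweaks I
mean; the exact property names in Hadoop's poms and Jenkinsfile may differ, so
please treat this as an illustration rather than the actual patch:

    # build with JDK 17 by raising the compiler release level
    # (maven.compiler.release is a standard maven-compiler-plugin property;
    # Hadoop's own poms may wire this differently)
    mvn clean install -DskipTests -Dmaven.compiler.release=17

    # bump the Docker memory limit that the Jenkinsfile passes to Yetus,
    # e.g. from 22g to 32g, to avoid the OOM kill mentioned above
    YETUS_ARGS+=("--dockermemlimit=32g")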

2. JPMS args
HADOOP-19219 is a good start at adding JPMS args for both testing and runtime;
please see the PR discussion for more details.
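
For context, JDK 17's strong encapsulation usually means opening a few
java.base packages to reflection-heavy code. The flags below only illustrate
that pattern; they are not the exact set from HADOOP-19219:

    <!-- e.g. passed to surefire/failsafe via argLine for tests, and exported
         through HADOOP_OPTS in hadoop-env.sh for runtime; the real list is
         in the PR -->
    <argLine>
      --add-opens java.base/java.lang=ALL-UNNAMED
      --add-opens java.base/java.util=ALL-UNNAMED
    </argLine>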

3. Third-party dependency upgrades.
Some dependencies do not work properly on JDK 17; they can be identified from
test failures once the above two steps are done, and a quick scan of available
upgrades can be done as shown below.
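
As a starting point for that scan, the standard versions-maven-plugin can list
which dependencies have newer releases available (this is generic Maven usage,
not something Hadoop-specific):

    # list dependencies with newer versions available; candidates to upgrade
    # when a library turns out to be broken on JDK 17
    mvn versions:display-dependency-updates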

Another question is the target version: for example, if JDK 17 support targets
3.4.x, we should port HADOOP-19107 to branch-3.4 too.

Thanks,
Cheng Pan

On 2024/07/29 19:03:31 Steve Loughran wrote:
> A lot of projects are moving off java8. making java17 the new baseline
> 
> what do we need to do there that is a blocker rather than just "nice"?
> 