This is an automated email from the ASF dual-hosted git repository.
xushiyan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/master by this push:
new 075fb030abc5 docs: Update minimum Java version to JDK 11 in documentation (#17824)
075fb030abc5 is described below
commit 075fb030abc56c6f5489866fe1221a021f96297f
Author: Y Ethan Guo <[email protected]>
AuthorDate: Sat Jan 10 09:43:43 2026 -0800
docs: Update minimum Java version to JDK 11 in documentation (#17824)
---
README.md | 5 ++---
docker/README.md | 4 ++--
hudi-kafka-connect/README.md | 2 +-
.../src/test/resources/upgrade-downgrade-fixtures/README.md | 2 +-
release/release_guide.md | 12 +++++-------
5 files changed, 11 insertions(+), 14 deletions(-)
diff --git a/README.md b/README.md
index 57eb0961ec43..219056b596e5 100644
--- a/README.md
+++ b/README.md
@@ -94,7 +94,7 @@ Learn more about Hudi at [https://hudi.apache.org](https://hudi.apache.org)
Prerequisites for building Apache Hudi:
* Unix-like system (like Linux, Mac OS X)
-* Java 8, 11 or 17
+* Java 11 or 17
* Git
* Maven (>=3.6.0)
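The raised JDK floor in the prerequisites above can be sanity-checked before a build. This is an illustrative sketch, not part of Hudi's build tooling; the version-string parsing is an assumption about common `java -version` output formats.

```shell
# Illustrative check that the local JDK meets the new minimum (11); handles
# both modern ("11.0.21") and legacy ("1.8.0_392") version schemes.
# Not part of Hudi's build tooling.
jdk_major() {
  case "$1" in
    1.*) printf '%s' "$1" | cut -d. -f2 ;;  # legacy scheme: 1.8.0_392 -> 8
    *)   printf '%s' "$1" | cut -d. -f1 ;;  # modern scheme: 17.0.9 -> 17
  esac
}

# Extract the quoted version from `java -version`; empty if java is absent.
ver="$(java -version 2>&1 | sed -n 's/.*version "\([^"]*\)".*/\1/p')" || ver=""
if [ -n "$ver" ] && [ "$(jdk_major "$ver")" -ge 11 ]; then
  echo "JDK $ver is new enough to build Hudi"
else
  echo "need JDK 11 or 17 (found: ${ver:-no java})"
fi
```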
@@ -163,8 +163,7 @@ Starting from versions 0.11, Hudi no longer requires `spark-avro` to be specified
The default Flink version supported is 1.20. The default Flink 1.20.x version, corresponding to `flink1.20` profile is 1.20.1.
Flink is Scala-free since 1.15.x, there is no need to specify the Scala version for Flink 1.15.x and above versions.
-Refer to the table below for building with different Flink and Scala versions. Besides, Flink 2.x do not support Java 8
-anymore, so it's not set as the default Flink version since Java 8 is the default Java version for Hudi now.
+Refer to the table below for building with different Flink and Scala versions.
| Maven build options | Expected Flink bundle jar name | Notes |
|:--------------------|:-------------------------------|:---------------------------------|
diff --git a/docker/README.md b/docker/README.md
index 0851e9b5b785..718d1943ef7e 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -140,8 +140,8 @@ Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
```
Now goto `<HUDI_REPO_DIR>/docker/hoodie/hadoop` and change the `Dockerfile` to pull dependent images corresponding to
-arm64. For example, in [base/Dockerfile](./hoodie/hadoop/base/Dockerfile) (which pulls jdk8 image), change the
-line `FROM openjdk:8u212-jdk-slim-stretch` to `FROM arm64v8/openjdk:8u212-jdk-slim-stretch`.
+arm64. For example, in [base/Dockerfile](./hoodie/hadoop/base/Dockerfile) (which pulls jdk11 image), change the
+line `FROM openjdk:11-jdk-slim-bullseye` to `FROM arm64v8/openjdk:11-jdk-slim-bullseye`.
Then, from under `<HUDI_REPO_DIR>/docker/hoodie/hadoop` directory, execute the following command to build as well as
push the image to the dockerhub repo:
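The `FROM` swap described in the hunk above can also be scripted rather than hand-edited. A minimal sketch using `sed` on a throwaway copy; the temp-dir Dockerfile here is a stand-in for illustration, not the repo's actual file:

```shell
# Minimal sketch of the arm64 base-image swap described above, applied with
# sed to a throwaway Dockerfile instead of hand-editing the real one.
workdir="$(mktemp -d)"
printf 'FROM openjdk:11-jdk-slim-bullseye\n' > "$workdir/Dockerfile"
# Prefix the openjdk base image with the arm64v8/ namespace.
sed -e 's|^FROM openjdk:|FROM arm64v8/openjdk:|' "$workdir/Dockerfile" \
  > "$workdir/Dockerfile.arm64"
cat "$workdir/Dockerfile.arm64"   # FROM arm64v8/openjdk:11-jdk-slim-bullseye
```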
diff --git a/hudi-kafka-connect/README.md b/hudi-kafka-connect/README.md
index 61c5b5d15a81..f1811717f32e 100644
--- a/hudi-kafka-connect/README.md
+++ b/hudi-kafka-connect/README.md
@@ -26,7 +26,7 @@ This is work is tracked by [HUDI-2324](https://issues.apache.org/jira/browse/HUD
The first thing you need to do to start using this connector is building it.
In order to do that, you need to install the following dependencies:
-- [Java 1.8+](https://openjdk.java.net/)
+- [Java 11+](https://openjdk.java.net/)
- [Apache Maven](https://maven.apache.org/)
- Install [kcat](https://github.com/edenhill/kcat)
- Install jq. `brew install jq`
diff --git a/hudi-spark-datasource/hudi-spark/src/test/resources/upgrade-downgrade-fixtures/README.md b/hudi-spark-datasource/hudi-spark/src/test/resources/upgrade-downgrade-fixtures/README.md
index 1890e25bf2cc..245c9f605914 100644
--- a/hudi-spark-datasource/hudi-spark/src/test/resources/upgrade-downgrade-fixtures/README.md
+++ b/hudi-spark-datasource/hudi-spark/src/test/resources/upgrade-downgrade-fixtures/README.md
@@ -50,7 +50,7 @@ All fixture tables use a consistent simple schema:
## Generating Fixtures
### Prerequisites
-- Java 8+ installed
+- Java 11+ installed
- Internet connection (for downloading Spark binaries and Hudi bundles via Maven)
### Generation Process
diff --git a/release/release_guide.md b/release/release_guide.md
index 3b18354c58b4..7fa77e44bd7f 100644
--- a/release/release_guide.md
+++ b/release/release_guide.md
@@ -408,9 +408,9 @@ Set up a few environment variables to simplify Maven commands that follow. This
1. This will deploy jar artifacts to the Apache Nexus Repository, which is the staging area for deploying jars to Maven Central.
2. Review all staged artifacts (https://repository.apache.org/). They should contain all relevant parts for each module, including pom.xml, jar, test jar, source, test source, javadoc, etc. Carefully review any new artifacts.
3. git checkout ${RELEASE_BRANCH}
- 4. Given that certain bundle jars are built by Java 11 (Flink 2.0 bundle) and Java 17 (Spark 4 bundle), multiple
+ 4. Given that certain bundle jars are built by Java 17 (Spark 4 bundle), multiple
scripts need to be run
- 1. For most modules with Java 8 build, run `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)` and
+ 1. For most modules with Java 11 build, run `export JAVA_HOME=$(/usr/libexec/java_home -v 11)` and
`/scripts/release/deploy_staging_jars.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy1.log"`
1. when prompted for the passphrase, if you have multiple gpg keys in your keyring, make sure that you enter
the right passphase corresponding to the same key (FINGERPRINT) as used while generating source release in
@@ -423,11 +423,9 @@ Set up a few environment variables to simplify Maven commands that follow. This
5. In case you faced any issue while building `hudi-platform-service` or `hudi-metaserver-server` module,
please ensure that you have docker daemon running. This is required to build `hudi-metaserver-server`
module. See [checklist](#checklist-to-proceed-to-the-next-step).
- 2. Continue with Java 11 build, run `export JAVA_HOME=$(/usr/libexec/java_home -v 11)` and
- `/scripts/release/deploy_staging_jars_java11.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy2.log"`
- 3. Continue with Java 17 build, run `export JAVA_HOME=$(/usr/libexec/java_home -v 17)` and
- `/scripts/release/deploy_staging_jars_java17.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy3.log"`
- 5. Note that the artifacts from Java 11 and 17 builds are uploaded to separate staging repos. You need to manually
+ 2. Continue with Java 17 build for Spark 4 bundle, run `export JAVA_HOME=$(/usr/libexec/java_home -v 17)` and
+ `/scripts/release/deploy_staging_jars_java17.sh 2>&1 | tee -a "/tmp/${RELEASE_VERSION}-${RC_NUM}.deploy2.log"`
+ 5. Note that the artifacts from Java 17 build are uploaded to a separate staging repo. You need to manually
download those artifacts and upload them to the first staging repo so that all artifacts stay in the same repo.
6. Review all staged artifacts by logging into Apache Nexus and clicking on "Staging Repositories" link on left pane.
Then find a "open" entry for apachehudi
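The two-pass staging deploy in the updated release guide above can be sketched as a small script. This is an illustration, not an official release script: it assumes macOS's `/usr/libexec/java_home` JDK switcher (as the guide's commands do), reuses the script paths from the guide, and the `RELEASE_VERSION`/`RC_NUM` defaults are placeholders.

```shell
# Illustrative sketch of the two-pass staging deploy described above.
# Assumes macOS's /usr/libexec/java_home; version values are placeholders.
RELEASE_VERSION="${RELEASE_VERSION:-1.0.0}"
RC_NUM="${RC_NUM:-1}"

# Build the tee log path used by each deploy pass.
log_path() {
  printf '/tmp/%s-%s.%s.log' "$RELEASE_VERSION" "$RC_NUM" "$1"
}

# Run one deploy pass under a specific JDK major version.
deploy_with_jdk() {
  JAVA_HOME="$(/usr/libexec/java_home -v "$1")"; export JAVA_HOME
  "$2" 2>&1 | tee -a "$(log_path "$3")"
}

# Only attempt the real deploys where java_home exists (i.e. on macOS).
if [ -x /usr/libexec/java_home ]; then
  deploy_with_jdk 11 /scripts/release/deploy_staging_jars.sh deploy1        # most modules
  deploy_with_jdk 17 /scripts/release/deploy_staging_jars_java17.sh deploy2 # Spark 4 bundle
fi
```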