This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 5063cd9  [SPARK-30807][K8S][TESTS] Support Java 11 in K8S integration tests
5063cd9 is described below

commit 5063cd93c8d76a6d9650d1243bf5e1cea8da1d94
Author: Dongjoon Hyun <dh...@apple.com>
AuthorDate: Thu Feb 13 11:17:27 2020 -0800

    [SPARK-30807][K8S][TESTS] Support Java 11 in K8S integration tests

    ### What changes were proposed in this pull request?

    This PR aims to support JDK11 testing in the K8S integration tests.
    - This is an update to the testing framework rather than to individual tests.
    - This enables JDK11 runtime testing even when JDK11 is not installed on your local system.

    ### Why are the changes needed?

    Apache Spark 3.0.0 adds JDK11 support, but the K8s integration tests have used JDK8 until now.

    ### Does this PR introduce any user-facing change?

    No. This is a dev-only, test-related PR.

    ### How was this patch tested?

    This is irrelevant to the Jenkins UT, but the Jenkins K8S IT (JDK8) should pass.
    - https://github.com/apache/spark/pull/27559#issuecomment-585903489 (JDK8 passed)

    And manually run the following for the JDK11 test.
    ```
    $ NO_MANUAL=1 ./dev/make-distribution.sh --r --pip --tgz -Phadoop-3.2 -Pkubernetes
    $ resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh --java-image-tag 11-jre-slim --spark-tgz $PWD/spark-*.tgz
    ```
    ```
    $ docker run -it --rm kubespark/spark:1318DD8A-2B15-4A00-BC69-D0E90CED235B /usr/local/openjdk-11/bin/java --version | tail -n1
    OpenJDK 64-Bit Server VM 18.9 (build 11.0.6+10, mixed mode)
    ```

    Closes #27559 from dongjoon-hyun/SPARK-30807.
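[Editor's note] The `--java-image-tag` flag described above is handled by the wrapper scripts with the same `while (( "$#" ))` / `case` loop they already use for flags such as `--image-tag`. A minimal, self-contained sketch of that parsing pattern (the `parse_args` helper and the sample values are hypothetical, not the actual test harness):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the flag-parsing pattern used by
# dev-run-integration-tests.sh; variable and flag names mirror the patch.
JAVA_IMAGE_TAG=      # empty means "keep the Dockerfile default (8-jre-slim)"
IMAGE_TAG="N/A"

parse_args() {
  while (( "$#" )); do
    case $1 in
      --image-tag)
        IMAGE_TAG="$2"
        shift          # drop the flag; its value is dropped by the shift below
        ;;
      --java-image-tag)
        JAVA_IMAGE_TAG="$2"
        shift          # same two-step consumption as above
        ;;
      *)
        echo "Unrecognized option: $1" >&2
        return 1
        ;;
    esac
    shift              # drop the consumed value
  done
}

parse_args --java-image-tag 11-jre-slim --image-tag myTag
echo "JAVA_IMAGE_TAG=${JAVA_IMAGE_TAG:-<default>}"   # prints JAVA_IMAGE_TAG=11-jre-slim
```

Leaving `JAVA_IMAGE_TAG` empty lets the Dockerfile's `ARG java_image_tag=8-jre-slim` default stand, which is how the patch keeps JDK8 as the default.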
    Authored-by: Dongjoon Hyun <dh...@apple.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
    (cherry picked from commit 859699135cb63b57f5d844e762070065cedb4408)
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../docker/src/main/dockerfiles/spark/Dockerfile         |  3 ++-
 resource-managers/kubernetes/integration-tests/README.md | 15 +++++++++++++--
 .../integration-tests/dev/dev-run-integration-tests.sh   | 10 ++++++++++
 resource-managers/kubernetes/integration-tests/pom.xml   |  4 ++++
 .../scripts/setup-integration-test-env.sh                | 14 +++++++++++---
 5 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile b/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile
index a1fc637..6ed37fc 100644
--- a/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile
+++ b/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile
@@ -14,8 +14,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+ARG java_image_tag=8-jre-slim
 
-FROM openjdk:8-jre-slim
+FROM openjdk:${java_image_tag}
 
 ARG spark_uid=185
 
diff --git a/resource-managers/kubernetes/integration-tests/README.md b/resource-managers/kubernetes/integration-tests/README.md
index d7ad35a..18b9191 100644
--- a/resource-managers/kubernetes/integration-tests/README.md
+++ b/resource-managers/kubernetes/integration-tests/README.md
@@ -6,13 +6,17 @@ title: Spark on Kubernetes Integration Tests
 # Running the Kubernetes Integration Tests
 
 Note that the integration test framework is currently being heavily revised and
-is subject to change. Note that currently the integration tests only run with Java 8.
+is subject to change.
 
 The simplest way to run the integration tests is to install and run Minikube, then run the following from this
 directory:
 
     ./dev/dev-run-integration-tests.sh
 
+To run tests with Java 11 instead of Java 8, use `--java-image-tag` to specify the base image.
+
+    ./dev/dev-run-integration-tests.sh --java-image-tag 11-jre-slim
+
 The minimum tested version of Minikube is 0.23.0. The kube-dns addon must be enabled. Minikube should
 run with a minimum of 4 CPUs and 6G of memory:
 
@@ -183,7 +187,14 @@ to the wrapper scripts and using the wrapper scripts will simply set these appro
     A specific image tag to use, when set assumes images with those tags are already built and available in the
     specified image repository. When set to <code>N/A</code> (the default) fresh images will be built.
   </td>
-  <td><code>N/A</code>
+  <td><code>N/A</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.javaImageTag</code></td>
+    <td>
+      A specific OpenJDK base image tag to use, when set uses it instead of 8-jre-slim.
+    </td>
+    <td><code>8-jre-slim</code></td>
   </tr>
   <tr>
     <td><code>spark.kubernetes.test.imageTagFile</code></td>

diff --git a/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh b/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
index 1f0a803..76d6e1c 100755
--- a/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
+++ b/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
@@ -23,6 +23,7 @@ DEPLOY_MODE="minikube"
 IMAGE_REPO="docker.io/kubespark"
 SPARK_TGZ="N/A"
 IMAGE_TAG="N/A"
+JAVA_IMAGE_TAG=
 BASE_IMAGE_NAME=
 JVM_IMAGE_NAME=
 PYTHON_IMAGE_NAME=
@@ -52,6 +53,10 @@ while (( "$#" )); do
       IMAGE_TAG="$2"
       shift
       ;;
+    --java-image-tag)
+      JAVA_IMAGE_TAG="$2"
+      shift
+      ;;
     --deploy-mode)
       DEPLOY_MODE="$2"
       shift
@@ -120,6 +125,11 @@ properties=(
   -Dtest.include.tags=$INCLUDE_TAGS
 )
 
+if [ -n "$JAVA_IMAGE_TAG" ];
+then
+  properties=( ${properties[@]} -Dspark.kubernetes.test.javaImageTag=$JAVA_IMAGE_TAG )
+fi
+
 if [ -n $NAMESPACE ];
 then
   properties=( ${properties[@]} -Dspark.kubernetes.test.namespace=$NAMESPACE )

diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index 8e1043f..369dfd4 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -39,6 +39,7 @@
     <spark.kubernetes.test.sparkTgz></spark.kubernetes.test.sparkTgz>
     <spark.kubernetes.test.unpackSparkDir>${project.build.directory}/spark-dist-unpacked</spark.kubernetes.test.unpackSparkDir>
     <spark.kubernetes.test.imageTag>N/A</spark.kubernetes.test.imageTag>
+    <spark.kubernetes.test.javaImageTag>8-jre-slim</spark.kubernetes.test.javaImageTag>
     <spark.kubernetes.test.imageTagFile>${project.build.directory}/imageTag.txt</spark.kubernetes.test.imageTagFile>
     <spark.kubernetes.test.deployMode>minikube</spark.kubernetes.test.deployMode>
     <spark.kubernetes.test.imageRepo>docker.io/kubespark</spark.kubernetes.test.imageRepo>
@@ -109,6 +110,9 @@
                 <argument>--image-tag</argument>
                 <argument>${spark.kubernetes.test.imageTag}</argument>
 
+                <argument>--java-image-tag</argument>
+                <argument>${spark.kubernetes.test.javaImageTag}</argument>
+
                 <argument>--image-tag-output-file</argument>
                 <argument>${spark.kubernetes.test.imageTagFile}</argument>

diff --git a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
index 9e04b96..ab90660 100755
--- a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
+++ b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
@@ -23,6 +23,7 @@ IMAGE_TAG_OUTPUT_FILE="$TEST_ROOT_DIR/target/image-tag.txt"
 DEPLOY_MODE="minikube"
 IMAGE_REPO="docker.io/kubespark"
 IMAGE_TAG="N/A"
+JAVA_IMAGE_TAG="8-jre-slim"
 SPARK_TGZ="N/A"
 
 # Parse arguments
@@ -40,6 +41,10 @@ while (( "$#" )); do
       IMAGE_TAG="$2"
       shift
       ;;
+    --java-image-tag)
+      JAVA_IMAGE_TAG="$2"
+      shift
+      ;;
     --image-tag-output-file)
       IMAGE_TAG_OUTPUT_FILE="$2"
       shift
@@ -82,6 +87,9 @@ then
   IMAGE_TAG=$(uuidgen);
   cd $SPARK_INPUT_DIR
 
+  # OpenJDK base-image tag (e.g. 8-jre-slim, 11-jre-slim)
+  JAVA_IMAGE_TAG_BUILD_ARG="-b java_image_tag=$JAVA_IMAGE_TAG"
+
   # Build PySpark image
   LANGUAGE_BINDING_BUILD_ARGS="-p $DOCKER_FILE_BASE_PATH/bindings/python/Dockerfile"
 
@@ -95,7 +103,7 @@ then
   case $DEPLOY_MODE in
     cloud)
       # Build images
-      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG $LANGUAGE_BINDING_BUILD_ARGS build
+      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG $JAVA_IMAGE_TAG_BUILD_ARG $LANGUAGE_BINDING_BUILD_ARGS build
 
       # Push images appropriately
       if [[ $IMAGE_REPO == gcr.io* ]] ;
@@ -109,13 +117,13 @@ then
     docker-for-desktop)
       # Only need to build as this will place it in our local Docker repo which is all
       # we need for Docker for Desktop to work so no need to also push
-      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG $LANGUAGE_BINDING_BUILD_ARGS build
+      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG $JAVA_IMAGE_TAG_BUILD_ARG $LANGUAGE_BINDING_BUILD_ARGS build
       ;;
     minikube)
       # Only need to build and if we do this with the -m option for minikube we will
      # build the images directly using the minikube Docker daemon so no need to push
-      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -m -r $IMAGE_REPO -t $IMAGE_TAG $LANGUAGE_BINDING_BUILD_ARGS build
+      $SPARK_INPUT_DIR/bin/docker-image-tool.sh -m -r $IMAGE_REPO -t $IMAGE_TAG $JAVA_IMAGE_TAG_BUILD_ARG $LANGUAGE_BINDING_BUILD_ARGS build
       ;;
     *)
       echo "Unrecognized deploy mode $DEPLOY_MODE" && exit 1