This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 3aa0cd4  [SPARK-38302][K8S][TESTS] Use `Java 17` in K8S IT in case of `spark-tgz` option
3aa0cd4 is described below

commit 3aa0cd4cd3ffbfa68e26d5d3128bda3cd4c2bc7d
Author: Qian.Sun <[email protected]>
AuthorDate: Sat Feb 26 15:58:11 2022 -0800

    [SPARK-38302][K8S][TESTS] Use `Java 17` in K8S IT in case of `spark-tgz` option
    
    ### What changes were proposed in this pull request?
    
    This PR aims to use Java 17 in K8s integration tests by default when setting `spark-tgz`.
    
    ### Why are the changes needed?
    
    When the `spark-tgz` parameter is set during integration tests, an error occurs because `resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile.java17` cannot be found.
    
    This is because the default value of `spark.kubernetes.test.dockerFile` is a [relative path](https://github.com/apache/spark/blob/master/resource-managers/kubernetes/integration-tests/pom.xml#L46).
    
    When using the tgz, the working directory is [`$UNPACKED_SPARK_TGZ`](https://github.com/apache/spark/blob/master/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh#L90), so the relative path is invalid.
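    The resolution logic introduced by this patch can be sketched as a small helper (the function name and the base path below are illustrative, not taken from the actual script): an absolute `DOCKER_FILE` is used as-is, while a bare filename is resolved against the Dockerfile base directory, so the value works regardless of the current working directory.
    ```shell
    # Sketch of the path-resolution rule: `[[ $p = /* ]]` is a bash glob
    # match that tests whether the path starts with "/", i.e. is absolute.
    resolve_docker_file() {
      local docker_file="$1" base_path="$2"
      if [[ $docker_file = /* ]]; then
        echo "$docker_file"                 # absolute: use verbatim
      else
        echo "$base_path/$docker_file"      # relative: anchor to base dir
      fi
    }

    resolve_docker_file "Dockerfile.java17" "/opt/spark/kubernetes/dockerfiles/spark"
    # -> /opt/spark/kubernetes/dockerfiles/spark/Dockerfile.java17
    resolve_docker_file "/tmp/custom/Dockerfile" "/opt/spark/kubernetes/dockerfiles/spark"
    # -> /tmp/custom/Dockerfile
    ```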
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Running the K8s integration tests manually:
    #### sbt
    ```shell
    $ build/sbt -Pkubernetes -Pkubernetes-integration-tests -Dtest.exclude.tags=minikube,r "kubernetes-integration-tests/test"
    
    KubernetesSuite:
    - Run SparkPi with no resources
    - Run SparkPi with no resources & statefulset allocation
    - Run SparkPi with a very long application name.
    - Use SparkLauncher.NO_RESOURCE
    - Run SparkPi with a master URL without a scheme.
    - Run SparkPi with an argument.
    - Run SparkPi with custom labels, annotations, and environment variables.
    - All pods have the same service account by default
    - Run extraJVMOptions check on driver
    - Run SparkRemoteFileTest using a remote data file
    - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties
    - Run SparkPi with env and mount secrets.
    - Run PySpark on simple pi.py example
    - Run PySpark to test a pyfiles example
    - Run PySpark with memory customization
    - Run in client mode.
    - Start pod creation from template
    - PVs with local hostpath storage on statefulsets
    - PVs with local hostpath and storageClass on statefulsets
    - PVs with local storage
    - Launcher client dependencies
    - SPARK-33615: Launcher client archives
    - SPARK-33748: Launcher python client respecting PYSPARK_PYTHON
    - SPARK-33748: Launcher python client respecting spark.pyspark.python and spark.pyspark.driver.python
    - Launcher python client dependencies using a zip file
    - Test basic decommissioning
    - Test basic decommissioning with shuffle cleanup
    - Test decommissioning with dynamic allocation & shuffle cleanups
    - Test decommissioning timeouts
    - SPARK-37576: Rolling decommissioning
    Run completed in 27 minutes, 8 seconds.
    Total number of tests run: 30
    Suites: completed 2, aborted 0
    Tests: succeeded 30, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    ```
    #### maven with spark-tgz
    ```shell
    $ bash resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh --spark-tgz $TARBALL_TO_TEST --exclude-tags r
    
    KubernetesSuite:
    - Run SparkPi with no resources
    - Run SparkPi with no resources & statefulset allocation
    - Run SparkPi with a very long application name.
    - Use SparkLauncher.NO_RESOURCE
    - Run SparkPi with a master URL without a scheme.
    - Run SparkPi with an argument.
    - Run SparkPi with custom labels, annotations, and environment variables.
    - All pods have the same service account by default
    - Run extraJVMOptions check on driver
    - Run SparkRemoteFileTest using a remote data file
    - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties
    - Run SparkPi with env and mount secrets.
    - Run PySpark on simple pi.py example
    - Run PySpark to test a pyfiles example
    - Run PySpark with memory customization
    - Run in client mode.
    - Start pod creation from template
    - PVs with local hostpath storage on statefulsets
    - PVs with local hostpath and storageClass on statefulsets
    - PVs with local storage
    - Launcher client dependencies
    - SPARK-33615: Launcher client archives
    - SPARK-33748: Launcher python client respecting PYSPARK_PYTHON
    - SPARK-33748: Launcher python client respecting spark.pyspark.python and spark.pyspark.driver.python
    - Launcher python client dependencies using a zip file
    - Test basic decommissioning
    - Test basic decommissioning with shuffle cleanup
    - Test decommissioning with dynamic allocation & shuffle cleanups
    - Test decommissioning timeouts
    - SPARK-37576: Rolling decommissioning
    Run completed in 30 minutes, 6 seconds.
    Total number of tests run: 30
    Suites: completed 2, aborted 0
    Tests: succeeded 30, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    ```
    #### maven without spark-tgz
    ```shell
    $ bash resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh --exclude-tags r
    
    KubernetesSuite:
    - Run SparkPi with no resources
    - Run SparkPi with no resources & statefulset allocation
    - Run SparkPi with a very long application name.
    - Use SparkLauncher.NO_RESOURCE
    - Run SparkPi with a master URL without a scheme.
    - Run SparkPi with an argument.
    - Run SparkPi with custom labels, annotations, and environment variables.
    - All pods have the same service account by default
    - Run extraJVMOptions check on driver
    - Run SparkRemoteFileTest using a remote data file
    - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties
    - Run SparkPi with env and mount secrets.
    - Run PySpark on simple pi.py example
    - Run PySpark to test a pyfiles example
    - Run PySpark with memory customization
    - Run in client mode.
    - Start pod creation from template
    - PVs with local hostpath storage on statefulsets
    - PVs with local hostpath and storageClass on statefulsets
    - PVs with local storage
    - Launcher client dependencies
    - SPARK-33615: Launcher client archives
    - SPARK-33748: Launcher python client respecting PYSPARK_PYTHON
    - SPARK-33748: Launcher python client respecting spark.pyspark.python and spark.pyspark.driver.python
    - Launcher python client dependencies using a zip file
    - Test basic decommissioning
    - Test basic decommissioning with shuffle cleanup
    - Test decommissioning with dynamic allocation & shuffle cleanups
    - Test decommissioning timeouts
    - SPARK-37576: Rolling decommissioning
    Run completed in 35 minutes, 0 seconds.
    Total number of tests run: 30
    Suites: completed 2, aborted 0
    Tests: succeeded 30, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    ```
    
    Closes #35627 from dcoliversun/SPARK-38302.
    
    Authored-by: Qian.Sun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 project/SparkBuild.scala                                            | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml              | 2 +-
 .../integration-tests/scripts/setup-integration-test-env.sh         | 6 +++++-
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index e9ef514..0f06e6b 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -645,7 +645,7 @@ object KubernetesIntegrationTests {
        val bindingsDir = s"$sparkHome/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/bindings"
         val javaImageTag = sys.props.get("spark.kubernetes.test.javaImageTag")
        val dockerFile = sys.props.getOrElse("spark.kubernetes.test.dockerFile",
-            "resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile.java17")
+            s"$sparkHome/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile.java17")
         val extraOptions = if (javaImageTag.isDefined) {
           Seq("-b", s"java_image_tag=$javaImageTag")
         } else {
diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index 0bc8508..318a903 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -43,7 +43,7 @@
    <spark.kubernetes.test.master></spark.kubernetes.test.master>
    <spark.kubernetes.test.namespace></spark.kubernetes.test.namespace>
    <spark.kubernetes.test.serviceAccountName></spark.kubernetes.test.serviceAccountName>
-    <spark.kubernetes.test.dockerFile>resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile.java17</spark.kubernetes.test.dockerFile>
+    <spark.kubernetes.test.dockerFile>Dockerfile.java17</spark.kubernetes.test.dockerFile>
 
     <test.exclude.tags></test.exclude.tags>
     <test.include.tags></test.include.tags>
diff --git a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
index e4a92b6..d896034 100755
--- a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
+++ b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
@@ -106,7 +106,11 @@ then
     # OpenJDK base-image tag (e.g. 8-jre-slim, 11-jre-slim)
     JAVA_IMAGE_TAG_BUILD_ARG="-b java_image_tag=$JAVA_IMAGE_TAG"
   else
-    JAVA_IMAGE_TAG_BUILD_ARG="-f $DOCKER_FILE"
+    if [[ $DOCKER_FILE = /* ]]; then
+      JAVA_IMAGE_TAG_BUILD_ARG="-f $DOCKER_FILE"
+    else
+      JAVA_IMAGE_TAG_BUILD_ARG="-f $DOCKER_FILE_BASE_PATH/$DOCKER_FILE"
+    fi
   fi
 
   # Build PySpark image

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
