skonto commented on a change in pull request #26161: [SPARK-27900][CORE][K8s] 
Add `spark.driver.killOnOOMError` flag in cluster mode
URL: https://github.com/apache/spark/pull/26161#discussion_r339359609
 
 

 ##########
 File path: 
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh
 ##########
 @@ -60,11 +60,24 @@ if ! [ -z ${HADOOP_CONF_DIR+x} ]; then
   SPARK_CLASSPATH="$HADOOP_CONF_DIR:$SPARK_CLASSPATH";
 fi
 
+DRIVER_VERBOSE=${DRIVER_VERBOSE:-false}
+
+function get_verbose_flag()
+{
+  if [[ $DRIVER_VERBOSE == "true" ]]; then
+    echo "--verbose"
+  else
+    echo ""
+  fi
+}
+
 case "$1" in
   driver)
     shift 1
+    VERBOSE_FLAG=$(get_verbose_flag)
     CMD=(
       "$SPARK_HOME/bin/spark-submit"
+      $VERBOSE_FLAG
       --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS"
       --deploy-mode client
       "$@"
 
 Review comment:
  @dongjoon-hyun this is required by the tests, so it could be treated as just a 
DEBUG mode, disabled by default. It is not meant to be another feature and does 
no harm. I want to use the verbose output so the K8s tests can read the Java 
option values set in the driver. Is there another way to trigger this (beyond 
writing a main that prints them and adding it to the Spark examples package)?
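
  To make the discussion concrete, here is a minimal standalone sketch of the 
flag logic from the diff above. The `DRIVER_VERBOSE` variable and 
`get_verbose_flag` function are taken from the diff; the real `spark-submit` 
invocation is replaced by building and echoing a `CMD` array, purely for 
illustration:

```shell
#!/usr/bin/env bash
# Sketch of the DRIVER_VERBOSE toggle from the entrypoint.sh diff.
# The real script passes the flag to spark-submit; here we only build CMD.
set -u

# Defaults to "false" when the env var is unset, matching the diff.
DRIVER_VERBOSE=${DRIVER_VERBOSE:-false}

get_verbose_flag() {
  if [[ "$DRIVER_VERBOSE" == "true" ]]; then
    echo "--verbose"
  fi
}

VERBOSE_FLAG=$(get_verbose_flag)

# The unquoted expansion is deliberate: when VERBOSE_FLAG is empty,
# word splitting drops it entirely and no empty argument is passed.
CMD=(spark-submit $VERBOSE_FLAG --deploy-mode client)
echo "${CMD[@]}"
```

  Running with `DRIVER_VERBOSE=true` in the environment yields 
`spark-submit --verbose --deploy-mode client`; without it, the `--verbose` 
argument is simply absent.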

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
