Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20669#discussion_r175520457
--- Diff: resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh ---
@@ -53,14 +53,10 @@ fi
case "$SPARK_K8S_CMD" in
driver)
CMD=(
- ${JAVA_HOME}/bin/java
- "${SPARK_JAVA_OPTS[@]}"
- -cp "$SPARK_CLASSPATH"
- -Xms$SPARK_DRIVER_MEMORY
- -Xmx$SPARK_DRIVER_MEMORY
- -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS
- $SPARK_DRIVER_CLASS
- $SPARK_DRIVER_ARGS
+ "$SPARK_HOME/bin/spark-submit"
+ --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS"
+ --deploy-mode client
+ "$@"
--- End diff --
> The point is, even if entrypoint.sh defines the contract, what arguments would a custom implementation pass it?
`entrypoint.sh` doesn't really define the contract; it implements it. Basically, the entry point has to respect whatever the Spark submit code tells it to do.
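To make that concrete, here is a minimal sketch of what a custom entry point would have to do for the driver case under the current behavior shown in the diff above: dispatch on `SPARK_K8S_CMD` and hand everything the submission client passes via the container args (`"$@"`) to `spark-submit` in client mode. The env variable names and the `spark-submit` invocation come from the diff; the surrounding `case` scaffolding, `set -e`, `exec`, and the fallback branch are illustrative assumptions, not a documented contract:

```bash
#!/usr/bin/env bash
# Hypothetical custom entrypoint: it has to reproduce the de facto contract
# that the stock entrypoint.sh implements for the driver command.
set -e

case "$SPARK_K8S_CMD" in
  driver)
    # The submission client sets SPARK_DRIVER_BIND_ADDRESS and passes the
    # remaining driver arguments as container args ("$@"); a custom entry
    # point has to forward them to spark-submit unchanged.
    exec "$SPARK_HOME/bin/spark-submit" \
      --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" \
      --deploy-mode client \
      "$@"
    ;;
  *)
    echo "Unknown SPARK_K8S_CMD: $SPARK_K8S_CMD" >&2
    exit 1
    ;;
esac
```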
So define the contract first. If you want users to be able to write their own custom entry points, then document that contract in a public doc so that it cannot change in future versions of Spark. That was the main reason (well, one of them) why I asked for this whole thing to be marked "experimental" in 2.3: nothing is defined, but there seems to be a lot of tribal knowledge about what people want to do.
That doesn't really work in the long run.
---