Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154549733
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -590,6 +604,11 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
        |                              the node running the Application Master via the Secure
        |                              Distributed Cache, for renewing the login tickets and the
        |                              delegation tokens periodically.
+       |
+       | Kubernetes only:
+       |  --kubernetes-namespace NS   The namespace in the Kubernetes cluster within which the
+       |                              application must be launched. The namespace must already
+       |                              exist in the cluster. (Default: default).
--- End diff --
There are some messages that need to be updated too, e.g.:

 | Spark standalone or Mesos with cluster deploy mode only:
 |  --supervise                 If given, restarts the driver on failure.
 |  --kill SUBMISSION_ID        If given, kills the driver specified.
 |  --status SUBMISSION_ID      If given, requests the status of the driver specified.

As with the above, k8s also supports killing a submission and requesting submission statuses, so this message should mention Kubernetes as well.
 | Spark standalone and Mesos only:
 |  --total-executor-cores NUM  Total cores for all executors.

k8s also supports the `totalExecutorCores` option, so this message should cover Kubernetes too; a sketch of the reworded usage text follows.
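For illustration only, here is a minimal, self-contained sketch (not the actual patch) of how these two usage sections might read once Kubernetes is mentioned. The grouping and exact wording below are assumptions, and the `UsageSketch` object is a hypothetical stand-in for the usage string built in `SparkSubmitArguments.printUsageAndExit`:

```scala
// Sketch: reword the quoted usage sections so the options Kubernetes supports
// (--kill, --status, --total-executor-cores) are no longer listed as
// standalone/Mesos only. Grouping and wording are assumptions, not the real patch.
object UsageSketch {
  val clusterManagerUsage: String =
    """
      | Spark standalone or Mesos with cluster deploy mode only:
      |  --supervise                 If given, restarts the driver on failure.
      |
      | Spark standalone, Mesos or Kubernetes with cluster deploy mode only:
      |  --kill SUBMISSION_ID        If given, kills the driver specified.
      |  --status SUBMISSION_ID      If given, requests the status of the driver specified.
      |
      | Spark standalone, Mesos and Kubernetes only:
      |  --total-executor-cores NUM  Total cores for all executors.
    """.stripMargin

  // Prints the reworded usage text, mirroring how printUsageAndExit emits its help message.
  def main(args: Array[String]): Unit = print(clusterManagerUsage)
}
```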