skonto commented on a change in pull request #23599: [SPARK-24793][K8s] Enhance spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#discussion_r266052315
 
 

 ##########
 File path: docs/running-on-kubernetes.md
 ##########
 @@ -403,6 +403,36 @@ RBAC authorization and how to configure Kubernetes service accounts for pods, pl
 [Using RBAC Authorization](https://kubernetes.io/docs/admin/authorization/rbac/) and
 [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/).
 
+## Spark Application Management
+
+As with Mesos and standalone mode, Kubernetes provides simple application management via the spark-submit CLI tool in cluster mode.
+Users can kill a job by providing the submission ID that is printed when the job is submitted.
+The submission ID follows the format ``namespace:driver-pod-name`` and is similar to the driverId found in standalone mode.
+If the user omits the namespace, the namespace set in the current k8s context is used.
+For example, if the user has set a specific namespace as follows, `kubectl config set-context minikube --namespace=spark`,
+then the `spark` namespace will be used by default. On the other hand, if no namespace is added to the specific context,
+then all namespaces will be considered by default, which means operations will affect all Spark applications.
+Moreover, spark-submit for application management uses the same backend code that is used for submitting the driver, so the same properties,
+such as `spark.kubernetes.context`, can be re-used.
+
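+A minimal sketch of how the context namespace interacts with these commands (the `minikube` context name, the `spark` namespace, and the API server address below are only illustrative):
+```bash
+# Pin the `spark` namespace on the current kubectl context (illustrative context name).
+$ kubectl config set-context minikube --namespace=spark
+# With the namespace omitted from the submission ID, the namespace of the
+# current (or explicitly selected) context is used.
+$ spark-submit --status spark-pi-1547948636094-driver \
+    --conf spark.kubernetes.context=minikube \
+    --master k8s://https://192.168.2.8:8443
+```
+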
+For example:
+```bash
+$ spark-submit --kill spark:spark-pi-1547948636094-driver --master k8s://https://192.168.2.8:8443
+```
+Users can also check the application status by using the `--status` flag:
+
+```bash
+$ spark-submit --status spark:spark-pi-1547948636094-driver --master k8s://https://192.168.2.8:8443
+```
+Both operations support glob patterns. For example, the user can run:
+```bash
+$ spark-submit --kill spark:spark-pi* --master k8s://https://192.168.2.8:8443
+```
+The above will kill all applications with the given prefix.
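+
+Since both operations accept globs, a status check can be scoped the same way (submission IDs below are illustrative):
+```bash
+$ spark-submit --status spark:spark-pi* --master k8s://https://192.168.2.8:8443
+```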
+
+The user can specify the grace period for pod termination via the `spark.kubernetes.submitGracePeriod` property,
+using `--conf` to provide it (the default value for all K8s pods is 30 seconds).
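+
+A minimal sketch combining this property with a kill request (the 60-second value is illustrative; the property name is the one documented above):
+```bash
+# Ask Kubernetes to wait up to 60 seconds for the driver pod to terminate gracefully.
+$ spark-submit --kill spark:spark-pi-1547948636094-driver \
+    --conf spark.kubernetes.submitGracePeriod=60 \
+    --master k8s://https://192.168.2.8:8443
+```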
 
 Review comment:
   It comes from K8s, https://kubernetes.io/docs/concepts/workloads/pods/pod/; it's in the docs.

