skonto commented on a change in pull request #23599: [SPARK-24793][K8s] Enhance spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#discussion_r249609476
##########
File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
##########
@@ -95,20 +95,66 @@ private[spark] class SparkSubmit extends Logging {
}
/**
- * Kill an existing submission using the REST protocol. Standalone and Mesos cluster mode only.
+ * Kill an existing submission.
+ * Standalone, Kubernetes and Mesos cluster mode only.
*/
private def kill(args: SparkSubmitArguments): Unit = {
- new RestSubmissionClient(args.master)
- .killSubmission(args.submissionToKill)
+ if (args.master.startsWith("k8s://")) {
+ try {
+ val ops = Utils.classForName("org.apache.spark.deploy.k8s.submit.K8sSubmitOps$")
Review comment:
There are two approaches for cluster mode: one uses a REST server/client, and the other uses whatever the environment provides. For K8s there is no REST API to call directly; we communicate with the K8s API server later on via the fabric8io client library. Reflection is needed because we must access classes that are not available in every Spark build by default. For example, if a build does not target K8s, these classes will not be present. Also, the spark-submit code is independent of the resource-manager dependencies, so we should not reference any types from the K8s backend there.
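To illustrate the point about reflection, here is a minimal, hedged sketch (not Spark's actual API; `BackendLoaderSketch` and `loadBackend` are illustrative names) of why `Class.forName`-style lookup lets spark-submit compile and run without the K8s module on the classpath, while the trailing `$` in `"...K8sSubmitOps$"` targets a Scala object's singleton module class:

```scala
import scala.util.Try

object BackendLoaderSketch {
  // Returns the loaded class if the backend module is on the classpath,
  // otherwise None. This mirrors why SparkSubmit uses reflection: a Spark
  // build without the K8s profile lacks these classes entirely, and a
  // direct reference would fail at compile or class-load time.
  def loadBackend(className: String): Option[Class[_]] =
    Try(Class.forName(className)).toOption

  def main(args: Array[String]): Unit = {
    // An always-present class loads fine.
    assert(loadBackend("java.lang.String").isDefined)

    // A backend class absent from this build yields None instead of
    // crashing at class-load time (assumes the K8s module is not here).
    assert(loadBackend("org.apache.spark.deploy.k8s.submit.K8sSubmitOps$").isEmpty)

    // The trailing "$" names a Scala object's module class; its singleton
    // instance is exposed through the static MODULE$ field.
    val predefClass  = Class.forName("scala.Predef$")
    val predefModule = predefClass.getField("MODULE$").get(null)
    assert(predefModule eq scala.Predef)

    println("ok")
  }
}
```

Methods on the reflectively loaded object can then be invoked via `clazz.getMethod(...).invoke(instance, ...)`, keeping the core module free of any compile-time dependency on the K8s backend.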
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]