vanzin commented on a change in pull request #23599: [SPARK-24793][K8s] Enhance spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#discussion_r253680370
##########
File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
##########
@@ -95,20 +95,66 @@ private[spark] class SparkSubmit extends Logging {
}
/**
- * Kill an existing submission using the REST protocol. Standalone and Mesos cluster mode only.
+ * Kill an existing submission.
+ * Standalone, Kubernetes and Mesos cluster mode only.
*/
private def kill(args: SparkSubmitArguments): Unit = {
- new RestSubmissionClient(args.master)
- .killSubmission(args.submissionToKill)
+ if (args.master.startsWith("k8s://")) {
+ try {
+ val ops = Utils.classForName("org.apache.spark.deploy.k8s.submit.K8sSubmitOps$")
Review comment:
I think parts of that change actually pose a similar issue, basically the part that deals with how to stage dependencies for cluster-mode submissions, since each backend may do something slightly different (YARN does its own thing, for example, even if the "copy files to HDFS" part is somewhat similar).
But in this particular case, this big chunk of reflection code is begging to just be turned into a plugin. You don't need to fully hash out that plugin interface, but at least add the basics of what you're trying to achieve in this PR to it. We can look at adding more things later as needed.
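For illustration, a minimal sketch of what the "basics" of such a plugin interface could look like, assuming a hypothetical `SparkSubmitOperation` trait (the name and signatures below are not an existing API, just an example shape): each backend would ship an implementation, and `SparkSubmit` would dispatch on the master URL instead of reaching into backend-specific classes via reflection.

```scala
package org.apache.spark.deploy

import org.apache.spark.SparkConf

/**
 * Hypothetical per-backend submit-operations plugin (illustrative only).
 * SparkSubmit would pick the implementation whose supports() returns true
 * for the configured master URL, rather than hard-coding reflection for
 * each URL scheme as in the diff above.
 */
private[spark] trait SparkSubmitOperation {

  /** Kill the submission with the given ID on this backend. */
  def kill(submissionId: String, conf: SparkConf): Unit

  /** Print the current status of the submission with the given ID. */
  def printSubmissionStatus(submissionId: String, conf: SparkConf): Unit

  /** Whether this backend handles the given master URL (e.g. "k8s://..."). */
  def supports(master: String): Boolean
}
```

The kill/status paths in SparkSubmit could then reduce to something like `operations.find(_.supports(args.master)).foreach(_.kill(args.submissionToKill, sparkConf))`, with the backend implementations loaded lazily so core keeps no compile-time dependency on the k8s module.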