Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/9182#discussion_r43281778
--- Diff: yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala ---
@@ -51,6 +51,38 @@ private[spark] abstract class YarnSchedulerBackend(
private implicit val askTimeout = RpcUtils.askRpcTimeout(sc.conf)
+ /** Application ID. Must be set by a subclass before starting the service */
+ private var appId: ApplicationId = null
+
+ /** Attempt ID. This is unset for client-side schedulers */
+ private var attemptId: Option[ApplicationAttemptId] = None
+
+ /** Scheduler extension services */
+ private val services: SchedulerExtensionServices = new SchedulerExtensionServices()
+
+ /**
+ * Bind to YARN. This *must* be done before calling [[start()]].
+ *
+ * @param appId YARN application ID
+ * @param attemptId Optional YARN attempt ID
+ */
+ protected def bindToYarn(appId: ApplicationId, attemptId: Option[ApplicationAttemptId]): Unit = {
+ this.appId = appId
+ this.attemptId = attemptId
+ }
+
+ override def start() {
+ require(appId != null, "application ID unset")
+ val binding = SchedulerExtensionServiceBinding(sc, appId, attemptId)
+ services.start(binding)
+ super.start()
+ }
+
+ override def stop(): Unit = {
+ super.stop()
--- End diff ---
Always good to be cautious. Now, should I try to be clever in that finally
clause and downgrade any service-stop exceptions into logged events? That
way, if the superclass did raise something, it wouldn't get lost behind a
second exception thrown from the finally clause.
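A minimal sketch of the "downgrade to a logged event" idea being discussed, using hypothetical stand-ins (`StopDemo`, `Services`, `stopBackend` are not Spark classes; `System.err` stands in for Spark's logging): stop the superclass first, then stop the services in a finally clause, catching and logging any failure there so it cannot mask an exception already in flight from the superclass.

```scala
object StopDemo {
  // Stub extension services whose stop() always fails, to exercise the path.
  class Services {
    def stop(): Unit = throw new RuntimeException("service stop failed")
  }

  /** Run the superclass stop, then stop the services in a finally clause.
   *  Any exception from services.stop() is downgraded to a logged event,
   *  so it cannot replace an exception thrown by superStop(). */
  def stopBackend(superStop: () => Unit, services: Services): Unit = {
    try {
      superStop()
    } finally {
      try {
        services.stop()
      } catch {
        case e: Exception =>
          // Log rather than rethrow: a failure here must not hide
          // whatever superStop() may have raised.
          System.err.println(s"Ignoring exception stopping services: ${e.getMessage}")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // The service-stop failure is logged, not rethrown.
    stopBackend(() => (), new Services())
  }
}
```

Without the inner try/catch, an exception leaving the finally clause would discard the one from `superStop()`, which is the loss the comment is worried about.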