Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19045#discussion_r219000926
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/ExecutorLossReason.scala ---
@@ -58,3 +58,11 @@ private [spark] object LossReasonPending extends ExecutorLossReason("Pending los
private[spark]
case class SlaveLost(_message: String = "Slave lost", workerLost: Boolean = false)
  extends ExecutorLossReason(_message)
+
+/**
+ * A loss reason that means the worker is marked for decommissioning.
+ *
+ * This is used by the task scheduler to remove state associated with the executor, but
+ * not to fail any tasks that were running on the executor before the executor is "fully" lost.
+ */
+private [spark] object WorkerDecommission extends ExecutorLossReason("Worker Decommission.")
--- End diff ---
Look at Master.scala (https://github.com/apache/spark/pull/19045/files#diff-29dffdccd5a7f4c8b496c293e87c8668R243)
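
For context, here is a minimal, self-contained sketch of how a scheduler backend might dispatch on the `ExecutorLossReason` hierarchy touched by this diff. The class shapes mirror the snippet above, but `handleLoss` and its return strings are hypothetical, for illustration only, and not part of the actual Spark scheduler.

```scala
// Simplified stand-in for the hierarchy in ExecutorLossReason.scala.
class ExecutorLossReason(val message: String)

case class SlaveLost(_message: String = "Slave lost", workerLost: Boolean = false)
  extends ExecutorLossReason(_message)

object WorkerDecommission extends ExecutorLossReason("Worker Decommission.")

// Hypothetical dispatch helper: a scheduler can pattern-match on the
// concrete reason to decide how aggressively to clean up.
def handleLoss(reason: ExecutorLossReason): String = reason match {
  // Decommissioning: drop scheduler state for the executor, but do not
  // fail its running tasks until the executor is "fully" lost.
  case WorkerDecommission        => s"decommission: ${reason.message}"
  case SlaveLost(msg, workerLost) => s"lost: $msg (workerLost=$workerLost)"
  case other                     => s"other: ${other.message}"
}

println(handleLoss(WorkerDecommission)) // decommission: Worker Decommission.
println(handleLoss(SlaveLost()))        // lost: Slave lost (workerLost=false)
```

Because `WorkerDecommission` is a singleton object, the match compares by identity, while `SlaveLost` is destructured as a case class.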