cloud-fan commented on a change in pull request #29788:
URL: https://github.com/apache/spark/pull/29788#discussion_r490759142
##########
File path: core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala
##########
@@ -88,44 +88,35 @@ private[spark] trait ExecutorAllocationClient {
* Default implementation delegates to kill, scheduler must override
* if it supports graceful decommissioning.
*
- * @param executorsAndDecomInfo identifiers of executors & decom info.
+ * @param executorsAndDecomReason identifiers of executors & decom reason.
* @param adjustTargetNumExecutors whether the target number of executors will be adjusted down
* after these executors have been decommissioned.
- * @param triggeredByExecutor whether the decommission is triggered at executor.
* @return the ids of the executors acknowledged by the cluster manager to be removed.
*/
def decommissionExecutors(
- executorsAndDecomInfo: Array[(String, ExecutorDecommissionInfo)],
- adjustTargetNumExecutors: Boolean,
- triggeredByExecutor: Boolean): Seq[String] = {
- killExecutors(executorsAndDecomInfo.map(_._1),
+ executorsAndDecomReason: Array[(String, ExecutorDecommissionReason)],
Review comment:
Is it possible for different executors in the same call to have different
`ExecutorDecommissionReason`s? If not, I think we are over-engineering here.
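
To make the question concrete, here is a minimal, self-contained sketch of the two
shapes the signature could take: per-executor reasons vs. one reason per batch. The
types and method names below are hypothetical stand-ins for illustration, not the
actual Spark classes touched by this PR.

```scala
// Hypothetical stand-ins for the PR's ExecutorDecommissionReason hierarchy.
sealed trait DecomReason
case object DynamicAllocation extends DecomReason // driver-initiated scale-down
case object NodeLoss extends DecomReason          // e.g. spot instance interruption

object DecomSketch {
  // Shape 1 (what the PR signature allows): every executor carries its own
  // reason, so a single call can mix reasons.
  def decommissionPerExecutor(executorsAndReasons: Array[(String, DecomReason)]): Seq[String] =
    executorsAndReasons.map { case (id, reason) =>
      println(s"decommissioning $id because $reason")
      id
    }.toSeq

  // Shape 2 (the simpler alternative): one reason shared by the whole batch.
  def decommissionBatch(executorIds: Array[String], reason: DecomReason): Seq[String] = {
    executorIds.foreach(id => println(s"decommissioning $id because $reason"))
    executorIds.toSeq
  }

  def main(args: Array[String]): Unit = {
    // If every call site looks like this (one uniform reason), the per-executor
    // tuple in Shape 1 is redundant and Shape 2 is enough:
    decommissionBatch(Array("1", "2"), DynamicAllocation)

    // Shape 1 only pays off if a call like this can actually happen:
    decommissionPerExecutor(Array("1" -> DynamicAllocation, "2" -> NodeLoss))
  }
}
```

If all current call sites pass a uniform reason, the batch-style signature seems to
cover them; the per-executor tuple only earns its keep once a mixed-reason call exists.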