LuciferYang commented on code in PR #39962:
URL: https://github.com/apache/spark/pull/39962#discussion_r1108460868
##########
core/src/main/scala/org/apache/spark/scheduler/ExecutorDecommissionInfo.scala:
##########
@@ -25,7 +25,19 @@ package org.apache.spark.scheduler
* shuffle data might be lost even if the external shuffle
service is enabled.
*/
private[spark]
-case class ExecutorDecommissionInfo(message: String, workerHost: Option[String] = None)
+abstract class DecommissionInfo
Review Comment:
If all subclasses must be defined in this file, maybe `sealed`?
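For context, a minimal sketch of what `sealed` buys here: the compiler can then check pattern matches on the base type for exhaustiveness. The field and the second subclass name below are simplified, hypothetical stand-ins, not the PR's actual definitions:

```scala
// Sketch: `sealed` restricts subclasses to this file, so the compiler
// can warn when a match on DecommissionInfo misses a case.
sealed abstract class DecommissionInfo

// Hypothetical, simplified stand-ins for the PR's two new classes.
case class ExecutorDecommissionInfo(message: String) extends DecommissionInfo
case class WorkerDecommissionInfo(message: String) extends DecommissionInfo

def describe(info: DecommissionInfo): String = info match {
  case ExecutorDecommissionInfo(m) => s"executor: $m"
  case WorkerDecommissionInfo(m)   => s"worker: $m"
  // dropping a case above would trigger a "match may not be exhaustive" warning
}
```

Without `sealed`, a subclass added in another file would silently fall through such matches at runtime instead of failing at compile time.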
##########
core/src/main/scala/org/apache/spark/scheduler/ExecutorDecommissionInfo.scala:
##########
@@ -25,7 +25,19 @@ package org.apache.spark.scheduler
* shuffle data might be lost even if the external shuffle
service is enabled.
*/
private[spark]
-case class ExecutorDecommissionInfo(message: String, workerHost: Option[String] = None)
+abstract class DecommissionInfo
+
+private[spark]
+case class ExecutorDecommissionInfo(
Review Comment:
Hmm... who triggered this? And I think we should add some comments for these two new classes.
##########
core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala:
##########
@@ -111,21 +111,24 @@ private[spark] trait ExecutorAllocationClient {
* @param executorId identifiers of executor to decommission
* @param decommissionInfo information about the decommission (reason, host loss)
* @param adjustTargetNumExecutors if we should adjust the target number of executors.
- * @param triggeredByExecutor whether the decommission is triggered at executor.
- *                            (TODO: add a new type like `ExecutorDecommissionInfo` for the
- *                            case where executor is decommissioned at executor first, so we
- *                            don't need this extra parameter.)
* @return whether the request is acknowledged by the cluster manager.
*/
final def decommissionExecutor(
executorId: String,
- decommissionInfo: ExecutorDecommissionInfo,
- adjustTargetNumExecutors: Boolean,
- triggeredByExecutor: Boolean = false): Boolean = {
- val decommissionedExecutors = decommissionExecutors(
- Array((executorId, decommissionInfo)),
- adjustTargetNumExecutors = adjustTargetNumExecutors,
- triggeredByExecutor = triggeredByExecutor)
+ decommissionInfo: DecommissionInfo,
+ adjustTargetNumExecutors: Boolean): Boolean = {
+ val decommissionedExecutors = decommissionInfo match {
+ case _: ExecutorDecommissionInfo =>
+ decommissionExecutors(
Review Comment:
A little confused. It seems `decommissionExecutors` doesn't use the
`triggeredByExecutor` flag?
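To illustrate the direction the TODO in the removed doc comment points at: instead of threading a `triggeredByExecutor: Boolean` through the call, the subtype of the info object can encode who triggered the decommission. This is only a sketch under that assumption; `ExecutorTriggeredDecommissionInfo` is a hypothetical name, not the class the PR actually introduces:

```scala
// Sketch: replace the boolean `triggeredByExecutor` parameter with a
// type match. Names are hypothetical stand-ins for the PR's classes.
sealed trait DecommissionInfo
case class ExecutorDecommissionInfo(message: String) extends DecommissionInfo
case class ExecutorTriggeredDecommissionInfo(message: String) extends DecommissionInfo

// Before: decommissionExecutor(id, info, adjust, triggeredByExecutor = true)
// After: the subtype itself carries that information.
def triggeredByExecutor(info: DecommissionInfo): Boolean = info match {
  case _: ExecutorTriggeredDecommissionInfo => true
  case _: ExecutorDecommissionInfo          => false
}
```

Under this scheme the flag never needs to be passed separately, which is presumably why the match in the new `decommissionExecutor` dispatches on the info type.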
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]