dongjoon-hyun commented on code in PR #41077:
URL: https://github.com/apache/spark/pull/41077#discussion_r1187692239


##########
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala:
##########
@@ -1086,11 +1086,19 @@ private[spark] class TaskSchedulerImpl(
     case ExecutorKilled =>
       logInfo(s"Executor $executorId on $hostPort killed by driver.")
     case _: ExecutorDecommission =>
-      logInfo(s"Executor $executorId on $hostPort is decommissioned.")
+      logInfo(s"Executor $executorId on $hostPort is decommissioned after " +
+        s"${getDecommissionDuration(executorId)}.")
     case _ =>
       logError(s"Lost executor $executorId on $hostPort: $reason")
   }
 
+  // Return the decommission duration as a string, or "unknown time" if no
+  // decommission start time is recorded for this executor.
+  private def getDecommissionDuration(executorId: String): String = {
+    executorsPendingDecommission.get(executorId)
+      .map(s => Utils.msDurationToString(clock.getTimeMillis() - s.startTime))
+      .getOrElse("unknown time")
+  }

Review Comment:
   This could give users a poor impression because it can be read as a Spark bug. Can we give a more informative message to avoid that? Or, can we keep the original message in this case?
   
   For example,
   - When we know the time: `Executor $executorId on $hostPort is decommissioned after ...`
   - When we don't know: `Executor $executorId on $hostPort is decommissioned.` (the same message as before)
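   
   For context, a minimal, self-contained sketch of what the suggested two-message approach could look like (not the actual PR code): the helper `decommissionDuration`, the stubbed `executorsPendingDecommission` map, `nowMillis`, and `msDurationToString` are placeholders standing in for the real `TaskSchedulerImpl` members.
   
   ```scala
   object DecommissionLogSketch {
     // Placeholder for the per-executor decommission bookkeeping in TaskSchedulerImpl.
     final case class DecommissionInfo(startTime: Long)
   
     private val executorsPendingDecommission =
       scala.collection.mutable.Map[String, DecommissionInfo]()
   
     private def nowMillis(): Long = System.currentTimeMillis()
   
     // Stand-in for Utils.msDurationToString.
     private def msDurationToString(ms: Long): String = s"$ms ms"
   
     // Option instead of a "unknown time" sentinel: the caller can keep the
     // original log message when the start time is unknown.
     private def decommissionDuration(executorId: String): Option[String] =
       executorsPendingDecommission.get(executorId)
         .map(info => msDurationToString(nowMillis() - info.startTime))
   
     def logMessage(executorId: String, hostPort: String): String =
       decommissionDuration(executorId) match {
         case Some(d) => s"Executor $executorId on $hostPort is decommissioned after $d."
         case None    => s"Executor $executorId on $hostPort is decommissioned."
       }
   
     def main(args: Array[String]): Unit = {
       executorsPendingDecommission("exec-1") = DecommissionInfo(nowMillis() - 42000L)
       println(logMessage("exec-1", "host1:1234")) // duration known
       println(logMessage("exec-2", "host2:1234")) // duration unknown -> original message
     }
   }
   ```
   
   Returning an `Option` rather than a sentinel string keeps the "unknown" case out of the user-facing message entirely, which is the point of the suggestion.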


