sarutak commented on a change in pull request #33253:
URL: https://github.com/apache/spark/pull/33253#discussion_r680634284
##########
File path: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala
##########
@@ -1208,6 +1232,33 @@ private[spark] class AppStatusListener(
}
}
+ private def killedTaskSummaryForSpeculationStageSummary(
+ reason: TaskEndReason,
+ oldSummary: Map[String, Int],
+ isSpeculative: Boolean): Map[String, Int] = {
+ reason match {
+ case k: TaskKilled if k.reason.contains("another attempt succeeded") =>
+ if (isSpeculative) {
+ oldSummary.updated("original attempt succeeded",
+ oldSummary.getOrElse("original attempt succeeded", 0) + 1)
+ } else {
+ oldSummary.updated("speculated attempt succeeded",
+ oldSummary.getOrElse("speculated attempt succeeded", 0) + 1)
+ }
+ // If the stage is finished and speculative tasks get killed, then the
+ // kill reason is "stage finished"
+ case k: TaskKilled if k.reason.contains("Stage finished") =>
+ if (isSpeculative) {
+ oldSummary.updated("original attempt succeeded",
+ oldSummary.getOrElse("original attempt succeeded", 0) + 1)
+ } else {
+ oldSummary
Review comment:
> Hmm, it seems difficult to judge that one attempt succeeded because
> the other fails (both attempts can fail).

I take back my comment. `Stage finished` is the reason given when a task is killed but
the stage the killed task belongs to has successfully finished:
https://github.com/apache/spark/blob/2fe12a75206d4dbef6d7678b876c16876136cdd0/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L1641-L1675
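For reference, here is a minimal, self-contained sketch (not the actual `AppStatusListener` code) of how the two kill reasons discussed here could be tallied. The object and method names (`KilledSummarySketch`, `updateSummary`, `bump`) are made up for illustration; it only assumes `org.apache.spark.{TaskEndReason, TaskKilled}` from Spark core, and the summary keys mirror the ones in the diff above:

```scala
import org.apache.spark.{TaskEndReason, TaskKilled}

// Sketch only: names and structure are illustrative, not the PR's implementation.
object KilledSummarySketch {
  def updateSummary(
      reason: TaskEndReason,
      summary: Map[String, Int],
      isSpeculative: Boolean): Map[String, Int] = {

    // Increment the counter stored under `key`, starting from 0 if absent.
    def bump(key: String): Map[String, Int] =
      summary.updated(key, summary.getOrElse(key, 0) + 1)

    reason match {
      // One attempt was killed because its counterpart finished first: if the killed
      // task is the speculative copy, the original succeeded, and vice versa.
      case k: TaskKilled if k.reason.contains("another attempt succeeded") =>
        if (isSpeculative) bump("original attempt succeeded")
        else bump("speculated attempt succeeded")
      // A still-running speculative copy was killed because the stage it belongs to
      // already finished successfully (the DAGScheduler path linked above), so the
      // original attempt is the one that succeeded.
      case k: TaskKilled if k.reason.contains("Stage finished") =>
        if (isSpeculative) bump("original attempt succeeded") else summary
      case _ =>
        summary
    }
  }
}
```

In this reading, a speculative task killed with `Stage finished` is counted as "original attempt succeeded", which is the same accounting the diff applies.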
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]