sarutak commented on a change in pull request #33253:
URL: https://github.com/apache/spark/pull/33253#discussion_r680623875
##########
File path: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala
##########
@@ -1208,6 +1232,33 @@ private[spark] class AppStatusListener(
}
}
+ private def killedTaskSummaryForSpeculationStageSummary(
+ reason: TaskEndReason,
+ oldSummary: Map[String, Int],
+ isSpeculative: Boolean): Map[String, Int] = {
+ reason match {
+ case k: TaskKilled if k.reason.contains("another attempt succeeded") =>
+ if (isSpeculative) {
+ oldSummary.updated("original attempt succeeded",
+ oldSummary.getOrElse("original attempt succeeded", 0) + 1)
+ } else {
+ oldSummary.updated("speculated attempt succeeded",
+ oldSummary.getOrElse("speculated attempt succeeded", 0) + 1)
+ }
+ // If the stage is finished and speculative tasks get killed, then the
+ // kill reason is "stage finished"
+ case k: TaskKilled if k.reason.contains("Stage finished") =>
+ if (isSpeculative) {
+ oldSummary.updated("original attempt succeeded",
+ oldSummary.getOrElse("original attempt succeeded", 0) + 1)
+ } else {
+ oldSummary
Review comment:
> Not sure whether, if the task killed with "Stage finished" is not speculative, we can safely increment the count for "speculated attempt succeeded". WDYT?

Hmm, it seems difficult to conclude that one attempt succeeded just because the other one was killed (both attempts can fail).
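To make the point concrete, here is a minimal, self-contained sketch (not the actual Spark implementation) of how a kill reason could be tallied into a per-stage summary map. The object name `KilledSummarySketch` and the helper `tally` are hypothetical; the key strings mirror the diff above, and `isSpeculative` refers to the killed attempt itself:

```scala
// Hypothetical sketch of the kill-reason tallying discussed above.
object KilledSummarySketch {
  def tally(
      reasonText: String,
      old: Map[String, Int],
      isSpeculative: Boolean): Map[String, Int] = {
    val keyOpt =
      if (reasonText.contains("another attempt succeeded")) {
        // The *other* attempt finished first: if the killed attempt was the
        // speculative one, the original succeeded, and vice versa.
        Some(if (isSpeculative) "original attempt succeeded"
             else "speculated attempt succeeded")
      } else if (reasonText.contains("Stage finished") && isSpeculative) {
        // A speculative attempt killed at stage end implies its original
        // finished, but the converse cannot be safely inferred (see above).
        Some("original attempt succeeded")
      } else {
        None
      }
    keyOpt.fold(old)(k => old.updated(k, old.getOrElse(k, 0) + 1))
  }
}
```

Note the asymmetry in the "Stage finished" branch: for a non-speculative kill nothing is counted, precisely because both attempts may have failed rather than one succeeding.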
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]