mridulm commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r883677191


##########
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
##########
@@ -1217,6 +1289,61 @@ private[spark] class TaskSetManager(
   def executorAdded(): Unit = {
     recomputeLocality()
   }
+
+  /**
+   * A class for checking which inefficient tasks should be speculated. The
+   * inefficient tasks come from the tasks already flagged as speculatable by
+   * the previous strategy.
+   */
+  private[scheduler] class InefficientTaskCalculator {
+    var taskProgressThreshold = 0.0
+    var updateSealed = false
+    private var lastComputeMs = -1L
+
+    def maybeRecompute(nowMs: Long): Unit = {
+      if (!updateSealed && (lastComputeMs <= 0 ||
+        nowMs > lastComputeMs + speculationTaskStatsCacheInterval)) {
+        var successRecords = 0L
+        var successRunTime = 0L
+        var numSuccessTasks = 0L
+        taskInfos.values.filter(_.status == "SUCCESS").foreach { taskInfo =>
+          successRecords += taskInfo.successRecords
+          successRunTime += taskInfo.successRunTime
+          numSuccessTasks += 1
+        }

Review Comment:
   Note: for the existing speculative execution, we simply do something
   similar with `taskInfos.values` - that is, we take all successful tasks
   into account. That should be fine here too - I am still trying to think
   through the implications of both approaches.
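   For context, here is a minimal, self-contained sketch contrasting the two
   signals being discussed. This is illustrative only - `TaskStats`, the
   multiplier, and the sample numbers are assumptions, not the actual
   `TaskSetManager` fields or configuration:

   ```scala
   object SpeculationSketch {
     case class TaskStats(durationMs: Long, records: Long)

     // Existing strategy: a running task becomes speculatable once its runtime
     // exceeds a multiple of the median duration of all successful tasks.
     // Assumes at least one successful task.
     def durationThreshold(successful: Seq[TaskStats], multiplier: Double): Double = {
       val durations = successful.map(_.durationMs).sorted
       multiplier * durations(durations.length / 2)
     }

     // The additional idea in this PR: aggregate records and run time over all
     // successful tasks to get an average progress rate, against which a
     // running task's per-record rate can be compared.
     def avgProgressRate(successful: Seq[TaskStats]): Double = {
       val totalRecords = successful.map(_.records).sum
       val totalRunTime = successful.map(_.durationMs).sum
       totalRecords.toDouble / totalRunTime
     }

     def main(args: Array[String]): Unit = {
       val done = Seq(TaskStats(1000L, 10000L), TaskStats(1200L, 11000L), TaskStats(900L, 9500L))
       println(s"duration threshold: ${durationThreshold(done, 1.5)} ms")
       println(s"avg progress rate:  ${avgProgressRate(done)} records/ms")
     }
   }
   ```

   Either way, both signals are derived from the same population of successful
   tasks, which is why taking all of `taskInfos.values` into account seems
   reasonable for the inefficiency check as well.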



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

