toujours33 commented on code in PR #38711: URL: https://github.com/apache/spark/pull/38711#discussion_r1035806206
##########
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##########

@@ -383,8 +383,8 @@ private[spark] class DAGScheduler(
   /**
    * Called by the TaskSetManager when it decides a speculative task is needed.
    */
-  def speculativeTaskSubmitted(task: Task[_]): Unit = {
-    eventProcessLoop.post(SpeculativeTaskSubmitted(task))
+  def speculativeTaskSubmitted(task: Task[_], taskIndex: Int = -1): Unit = {
+    eventProcessLoop.post(SpeculativeTaskSubmitted(task, taskIndex))

Review Comment:
I'll check later whether it makes a difference to use `partitionId` instead of `taskIndex`. Btw, even if we do use `partitionId`, we can minimize the change to `SpeculativeTaskSubmitted` in `DAGSchedulerEvent`. But a change to the developer API `SparkListenerSpeculativeTaskSubmitted` is unavoidable, since `SparkListenerSpeculativeTaskSubmitted` currently takes only `stageId` and `stageAttemptId` as arguments.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
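To make the compatibility concern concrete, here is a hedged sketch (a simplified stand-in, not Spark's actual source) of how the developer API event could gain a `taskIndex` field while keeping old two-argument call sites compiling: give the new field a default value. The field names follow the diff and comment above; the default-argument approach is an illustration, not the merged implementation, and note that a case-class signature change can still break binary compatibility (`unapply`, `copy`) even when source compatibility is preserved.

```scala
// Simplified stand-in for the developer API event; not Spark's actual file.
// Adding taskIndex with a default (-1 = unknown, matching the diff above)
// lets existing two-argument call sites keep compiling unchanged.
case class SparkListenerSpeculativeTaskSubmitted(
    stageId: Int,
    stageAttemptId: Int = 0,
    taskIndex: Int = -1)

object SpeculativeEventDemo {
  def main(args: Array[String]): Unit = {
    // Old-style call site: compiles as before, taskIndex falls back to -1.
    val legacy = SparkListenerSpeculativeTaskSubmitted(3, 0)
    // New-style call site: the scheduler passes the speculative task's index.
    val updated = SparkListenerSpeculativeTaskSubmitted(3, 0, taskIndex = 7)
    println(legacy.taskIndex)   // -1
    println(updated.taskIndex)  // 7
  }
}
```

The same default-value trick is what the `speculativeTaskSubmitted(task, taskIndex = -1)` signature in the diff relies on: callers that predate the change need no edits.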