jiangxb1987 commented on a change in pull request #24497:
[SPARK-27630][CORE] Stage retry causes totalRunningTasks calculation to be negative
URL: https://github.com/apache/spark/pull/24497#discussion_r284829874
##########
File path: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
##########
@@ -646,10 +646,9 @@ private[spark] class ExecutorAllocationManager(
 private[spark] class ExecutorAllocationListener extends SparkListener {

   private val stageIdToNumTasks = new mutable.HashMap[Int, Int]
-  // Number of running tasks per stage including speculative tasks.
-  // Should be 0 when no stages are active.
-  private val stageIdToNumRunningTask = new mutable.HashMap[Int, Int]
   private val stageIdToTaskIndices = new mutable.HashMap[Int, mutable.HashSet[Int]]
+  private val liveTaskIds = new mutable.HashSet[Long]
Review comment:
Which problem does this change aim to address? Do we have different
expectations for 'regular' tasks and speculative tasks?
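
For context, the failure mode named in the PR title can be reproduced with a
minimal sketch. The object RunningTaskBookkeeping and its simplified event
handlers below are hypothetical stand-ins, not Spark's actual listener: a
late task-end event from a retried stage attempt decrements a per-stage
counter whose entry was already cleared, while removing an id from a set of
live task ids is idempotent.

import scala.collection.mutable

// Hypothetical stand-in for the real listener, contrasting the two
// bookkeeping strategies touched by this diff.
object RunningTaskBookkeeping {
  // Old approach: number of running tasks per stage id.
  private val stageIdToNumRunningTask = new mutable.HashMap[Int, Int]
  // New approach: ids of all currently running tasks.
  private val liveTaskIds = new mutable.HashSet[Long]

  def onTaskStart(stageId: Int, taskId: Long): Unit = {
    stageIdToNumRunningTask(stageId) =
      stageIdToNumRunningTask.getOrElse(stageId, 0) + 1
    liveTaskIds += taskId
  }

  def onTaskEnd(stageId: Int, taskId: Long): Unit = {
    // The counter decrements even when the stage's entry was already
    // cleared by a retry, so the sum can drop below zero.
    stageIdToNumRunningTask(stageId) =
      stageIdToNumRunningTask.getOrElse(stageId, 0) - 1
    // Removing an id that is already gone is a no-op.
    liveTaskIds -= taskId
  }

  // Simulates the cleanup done when a stage fails and is resubmitted.
  def onStageRetried(stageId: Int): Unit = stageIdToNumRunningTask -= stageId

  def totalByCounter: Int = stageIdToNumRunningTask.values.sum
  def totalBySet: Int = liveTaskIds.size

  def main(args: Array[String]): Unit = {
    onTaskStart(stageId = 1, taskId = 100L) // task of stage attempt 0
    onStageRetried(stageId = 1)             // attempt 0 fails; entry cleared
    onTaskEnd(stageId = 1, taskId = 100L)   // late task-end from attempt 0
    println(s"counter total: $totalByCounter") // prints -1
    println(s"set total:     $totalBySet")     // prints 0
  }
}

The set-based approach relies on task ids being unique across stage attempts,
which holds for Spark's Long task ids; whether it should still distinguish
speculative tasks, as the removed comment said the counter did, is what the
question above is probing.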
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.