Ngone51 commented on issue #24497: [SPARK-27630][CORE] Stage retry causes totalRunningTasks calculation to be negative
URL: https://github.com/apache/spark/pull/24497#issuecomment-495106595
 
 
   I think a similar problem also exists with `stageIdToTaskIndices` and `stageIdToSpeculativeTaskIndices`: currently, we do not clear these indices when a stage re-submit event arrives (`SparkListenerStageSubmitted`). That means the indices are shared across multiple stage attempts until the stage completes. As a result, when task start/end events arrive after the stage has been re-submitted, we get wrong indices, which ultimately skews the `pendingTasks()` / `maxNumExecutorsNeeded()` counts.
   
   Maybe we should really use `stageAttemptId` instead of `stageId`?
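
   To make the idea concrete, here is a minimal, hypothetical sketch (not the actual `ExecutorAllocationManager` code; the object and method names are invented for illustration) of keying task indices by `(stageId, stageAttemptId)`, so a retried attempt starts with a fresh index set instead of inheriting stale indices from the previous attempt:

   ```scala
   import scala.collection.mutable

   // Hypothetical sketch: track task indices per stage *attempt*,
   // so a re-submitted stage does not reuse the previous attempt's indices.
   object AttemptKeyedIndices {
     // (stageId, stageAttemptId) -> task indices seen for that attempt
     private val stageAttemptToTaskIndices =
       mutable.HashMap[(Int, Int), mutable.HashSet[Int]]()

     def onTaskStart(stageId: Int, stageAttemptId: Int, taskIndex: Int): Unit = {
       stageAttemptToTaskIndices
         .getOrElseUpdate((stageId, stageAttemptId), mutable.HashSet[Int]())
         .add(taskIndex)
     }

     def onStageCompleted(stageId: Int): Unit = {
       // Drop the indices of every attempt of the finished stage.
       val staleKeys = stageAttemptToTaskIndices.keys.filter(_._1 == stageId).toSeq
       stageAttemptToTaskIndices --= staleKeys
     }

     def runningTaskCount(stageId: Int, stageAttemptId: Int): Int =
       stageAttemptToTaskIndices.get((stageId, stageAttemptId)).map(_.size).getOrElse(0)
   }
   ```

   With this keying, a task-start event from attempt 0 arriving after attempt 1 was submitted lands in a different bucket, so it cannot corrupt attempt 1's count.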
