cxzl25 commented on issue #24497: [SPARK-27630][CORE] Stage retry causes totalRunningTasks calculation to be negative
URL: https://github.com/apache/spark/pull/24497#issuecomment-489190236
 
 
   Thank you for your suggestions and help. @squito 
   
`ExecutorAllocationListener` stores stage-related information in hash maps keyed by the stage id only, with no stage attempt id, so the statistics it tracks are meant to describe the currently active stage.
   
   
https://github.com/apache/spark/blob/6c2d351f5466d42c4d227f5627bd3709c266b5ce/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L648-L663
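   As a rough sketch of the shapes involved (hypothetical variable names, not the actual fields in `ExecutorAllocationManager`), keying the counters by stage id alone merges every attempt of a stage into one entry, whereas a `(stageId, stageAttemptId)` key would keep a zombie attempt's bookkeeping separate:

```scala
import scala.collection.mutable

object StageKeyShapes {
  // Current shape: one counter per stage id, shared by every attempt of the stage.
  val runningTasksByStageId = new mutable.HashMap[Int, Int]

  // Possible alternative: one counter per (stageId, stageAttemptId), so a zombie
  // attempt's events cannot touch the active attempt's entry.
  val runningTasksByAttempt = new mutable.HashMap[(Int, Int), Int]
}
```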
   
   In SPARK-11334, I saw a description added to `stageIdToNumRunningTask`:
   >Number of running tasks per stage including speculative tasks.
   >Should be 0 when no stages are active.
   
   If all of a stage's attempts are zombies, their information should not be counted toward the currently active stage.
   I tried adding a short comment on `totalRunningTasks`, but it could perhaps be made simpler.
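
   As a toy walkthrough of the failure mode (hypothetical handler names and a simplified stage-completed step, not the real listener code), a counter keyed only by stage id can be driven below zero when a zombie attempt's task-end events land on the retried attempt's entry:

```scala
import scala.collection.mutable

object NegativeRunningTasksSketch {
  // Counter keyed only by stage id, mirroring the shape described above.
  private val stageIdToNumRunningTask = new mutable.HashMap[Int, Int]

  private def onTaskStart(stageId: Int): Unit =
    stageIdToNumRunningTask(stageId) =
      stageIdToNumRunningTask.getOrElse(stageId, 0) + 1

  private def onTaskEnd(stageId: Int): Unit =
    if (stageIdToNumRunningTask.contains(stageId)) {
      stageIdToNumRunningTask(stageId) -= 1
    }

  // Analogue of totalRunningTasks: the sum over all tracked stages.
  private def totalRunningTasks: Int = stageIdToNumRunningTask.values.sum

  def main(args: Array[String]): Unit = {
    onTaskStart(1)             // stage 1, attempt 0, task 1
    onTaskStart(1)             // stage 1, attempt 0, task 2

    // A fetch failure marks attempt 0 as zombie; in this toy model the
    // stage-completed handling simply drops the stage's entry.
    stageIdToNumRunningTask.remove(1)

    onTaskStart(1)             // stage 1, attempt 1, task 1

    // The two zombie tasks from attempt 0 finally finish; with a stage-id-only
    // key their end events decrement the retried attempt's counter.
    onTaskEnd(1)
    onTaskEnd(1)

    println(totalRunningTasks) // -1, although attempt 1 still has a running task
  }
}
```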
