Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16620
@kayousterhout @squito @markhamstra
Thanks a lot for the comments. I've refined the change accordingly.
I still have one concern:
> If this is a correct description, I'd argue that (5) is the problem:
that when ShuffleMapTask2 finishes, we should not be updating a bunch of state
in the DAGScheduler saying that there's output ready as a result. If I'm
understanding correctly, there's a relatively simple fix to this problem: In
DAGScheduler.scala, in handleTaskCompletion, we should exit (and not update any
state) when the task is from an earlier stage attempt that's not the current
active attempt. This can be done by changing the if-statement on line 1141 to
include:
|| stageIdToStage(task.stageId).latestInfo.attemptId != task.stageAttemptId
With the above, would we be ignoring all results from old stage attempts?
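To make sure I read the suggestion correctly, here is a minimal sketch of the proposed early-exit check (simplified stand-in types, not the actual DAGScheduler code):

```scala
// Illustrative only: tiny stand-ins for the real DAGScheduler structures,
// just to make the proposed guard concrete.
import scala.collection.mutable

case class StageInfo(attemptId: Int)
case class Stage(latestInfo: StageInfo)
case class Task(stageId: Int, stageAttemptId: Int)

object HandleTaskCompletionSketch {
  val stageIdToStage = mutable.HashMap[Int, Stage]()

  // Mirrors the suggested condition: skip (and update no state) when the stage
  // is unknown, or when the completed task belongs to an attempt other than
  // the currently active one.
  def shouldIgnore(task: Task): Boolean =
    !stageIdToStage.contains(task.stageId) ||
      stageIdToStage(task.stageId).latestInfo.attemptId != task.stageAttemptId

  def main(args: Array[String]): Unit = {
    stageIdToStage(2) = Stage(StageInfo(attemptId = 1))
    println(shouldIgnore(Task(stageId = 2, stageAttemptId = 0))) // true: old attempt, ignored
    println(shouldIgnore(Task(stageId = 2, stageAttemptId = 1))) // false: active attempt, processed
  }
}
```

If that reading is right, every result from a non-active attempt would be dropped, which is exactly what my question above is about.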
As @squito mentioned:
> It also can potentially improve performance, since you may submit
downstream stages more quickly, rather than waiting for all tasks in the active
taskset to complete.
Would it perhaps be beneficial to also count the results from old stage attempts?