Github user mateiz commented on the pull request:

    https://github.com/apache/spark/pull/8180#issuecomment-137838489
  
    Alright, I think this is ready to review now. Changes made:
    - Added more docs to DAGScheduler about how stages may be re-attempted
    - Added tests for:
      - More complex lineage graphs with multiple map jobs and result jobs pending
      - Fetch failures in the above, which also lead to resubmitted stages
      - Executor failure during a stage, and late task completion messages
    - Better handling of getting the MapOutputStatistics object -- the DAGScheduler now grabs it at the point when it knows the map output tracker has outputs for all tasks, whereas before, the tracker might have lost some outputs by the time SparkContext.submitMapStage queried it.
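    To illustrate the timing issue in that last point, here is a toy simulation (all names here -- `ToyOutputTracker`, `register`, `lose_executor`, `statistics` -- are hypothetical simplifications for illustration, not Spark's actual API): if the statistics snapshot is taken only when the caller later queries, an executor loss in between can leave it incomplete, whereas snapshotting at the moment the scheduler knows all map outputs are registered is safe.

    ```python
    # Toy model of the race described above; not Spark's real classes.
    class ToyOutputTracker:
        def __init__(self):
            self.outputs = {}  # map partition id -> output size in bytes

        def register(self, partition, size):
            self.outputs[partition] = size

        def lose_executor(self, partitions):
            # An executor failure removes the map outputs it hosted.
            for p in partitions:
                self.outputs.pop(p, None)

        def statistics(self):
            # Snapshot of the currently known output sizes.
            return dict(self.outputs)

    tracker = ToyOutputTracker()
    for p in range(4):
        tracker.register(p, 100)

    # Conceptually, the fix: take the snapshot as soon as the scheduler
    # knows all map tasks have registered their outputs...
    stats_at_completion = tracker.statistics()

    # ...because by the time a later query arrives, outputs may be gone.
    tracker.lose_executor([2, 3])
    stats_queried_late = tracker.statistics()

    assert len(stats_at_completion) == 4  # complete view of all 4 partitions
    assert len(stats_queried_late) == 2   # incomplete after the executor loss
    ```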


