GitHub user xuanyuanking opened a pull request:

    https://github.com/apache/spark/pull/20930

    [SPARK-23811][Core] Same tasks' FetchFailed event comes before Success will cause child stage never succeed

    ## What changes were proposed in this pull request?
    
    This bug is caused by the abnormal scenario described below:
    
    1. ShuffleMapTask 1.0 is running; this task will fetch data from ExecutorA.
    2. ExecutorA is lost, which triggers `mapOutputTracker.removeOutputsOnExecutor(execId)`, so the shuffleStatus changes.
    3. The speculative ShuffleMapTask 1.1 starts and immediately gets a FetchFailed.
    4. ShuffleMapTask 1 is the last task of its stage, so the stage can never succeed: the DAGScheduler finds no missing tasks to resubmit.
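    The race can be sketched with a minimal, self-contained simulation (all names here are hypothetical; this is not the actual DAGScheduler/TaskSetManager code, just a model of the event ordering under the assumptions above):
    
    ```scala
    // Model of the race: the task-set view of completed partitions diverges
    // from the map-output-tracker view of registered shuffle outputs.
    object FetchFailedRaceSketch {
      // Partitions whose task the (model) TaskSetManager considers successful.
      val completedPartitions = scala.collection.mutable.Set[Int]()
      // Partition -> executor holding its shuffle output (model MapOutputTracker).
      val shuffleOutputs = scala.collection.mutable.Map[Int, String]()
    
      def main(args: Array[String]): Unit = {
        // An earlier task of the stage finished and registered output on ExecutorA.
        shuffleOutputs(0) = "ExecutorA"; completedPartitions += 0
    
        // 1. ShuffleMapTask 1.0 is running, reading shuffle data from ExecutorA.
        // 2. ExecutorA is lost: all of its outputs are removed from the tracker.
        shuffleOutputs.filter(_._2 == "ExecutorA").keys.toList
          .foreach(shuffleOutputs.remove)
    
        // 3. Speculative ShuffleMapTask 1.1 starts and immediately hits a
        //    FetchFailed; the stage is marked for resubmission.
        val stageNeedsResubmit = true
    
        // 4. The Success event of task 1.0 arrives *after* the FetchFailed, and
        //    partition 1 is marked complete even though its output is gone.
        completedPartitions += 1
    
        // On resubmission, no partition looks missing from the task-set point of
        // view, so no task is launched -- yet partition 1 has no shuffle output,
        // and the child stage can never fetch it.
        val missingTasks = (0 to 1).filterNot(completedPartitions.contains)
        println(s"resubmit=$stageNeedsResubmit, missingTasks=$missingTasks, " +
          s"outputs=${shuffleOutputs.keySet}")
      }
    }
    ```
    
    In this model the resubmitted stage sees an empty `missingTasks` list while partition 1's output is absent, which is the hang the patch addresses.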
    
    Detailed screenshots are attached in the JIRA comments.
    
    ## How was this patch tested?
    
    Added a new unit test in `TaskSetManagerSuite`.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuanyuanking/spark SPARK-23811

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #20930
    
----
commit 2907075b43eac26c7efbe4aca5f2c037bb5934c2
Author: Yuanjian Li <xyliyuanjian@...>
Date:   2018-03-29T04:50:16Z

    [SPARK-23811][Core] Same tasks' FetchFailed event comes before Success will cause child stage never succeed

----


---
