Li Yuanjian created SPARK-23811:
-----------------------------------

             Summary: A task attempt's FetchFailed event arriving before another 
attempt's Success causes the child stage to never succeed
                 Key: SPARK-23811
                 URL: https://issues.apache.org/jira/browse/SPARK-23811
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.3.0, 2.2.0
            Reporter: Li Yuanjian


This bug is caused by the abnormal scenario described below:
 # ShuffleMapTask 1.0 is running; this task will fetch data from ExecutorA.
 # ExecutorA is lost, which triggers `mapOutputTracker.removeOutputsOnExecutor(execId)`, 
so the shuffle status changes.
 # Speculative ShuffleMapTask 1.1 starts and immediately gets a FetchFailed.
 # ShuffleMapTask 1 is the last task of its stage, so the stage will never 
succeed because there is no missing task the DAGScheduler can find (see the 
sketch after this list).
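
For illustration only, below is a minimal, self-contained Scala sketch of this 
event ordering. It is a toy model of the scheduler state, not Spark's actual 
DAGScheduler code; the object and variable names (SchedulerStateSketch, 
parentOutputs, pendingPartitions, stageMarkedForResubmit) are hypothetical.

{code:scala}
object SchedulerStateSketch {
  def main(args: Array[String]): Unit = {
    // Toy model only, not Spark code.
    // Map outputs of the parent shuffle that ShuffleMapTask 1 reads: mapId -> executor.
    var parentOutputs = Map(0 -> "ExecutorA")
    // Partitions of the current ShuffleMapStage still waiting for a result;
    // partition 1 (ShuffleMapTask 1) is the last one.
    var pendingPartitions = Set(1)
    var stageMarkedForResubmit = false

    // Step 1: ShuffleMapTask 1.0 is running and will fetch its input from ExecutorA.

    // Step 2: ExecutorA is lost; its outputs are dropped from the shuffle status,
    // analogous to mapOutputTracker.removeOutputsOnExecutor(execId).
    parentOutputs = parentOutputs.filter { case (_, exec) => exec != "ExecutorA" }

    // Step 3: speculative ShuffleMapTask 1.1 starts and immediately gets a
    // FetchFailed, so the stage is marked to be resubmitted after its parent
    // is recomputed.
    stageMarkedForResubmit = true

    // The Success of attempt 1.0 arrives only after the FetchFailed above, and
    // it still clears partition 1 from the stage's pending set.
    pendingPartitions -= 1

    // Step 4: when the stage is resubmitted, the scheduler finds no missing
    // task to run, so the stage never finishes and the child stage never succeeds.
    println(s"resubmit=$stageMarkedForResubmit, " +
      s"missingTasks=${pendingPartitions.size}, parentOutputs=${parentOutputs.size}")
  }
}
{code}

Running the sketch ends with a stage marked for resubmission but zero missing 
tasks, which is the inconsistent state described in step 4.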


