[ 
https://issues.apache.org/jira/browse/SPARK-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

meiyoula updated SPARK-11334:
-----------------------------
    Description: 
With the *Dynamic Allocation* feature enabled, when a task fails more than 
*maxFailure* times, all the dependent jobs, stages, and tasks are killed or 
aborted. In this process, the *SparkListenerTaskEnd* events can arrive after 
*SparkListenerStageCompleted* and *SparkListenerJobEnd*, as in the Event Log below:
{quote}
{"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage 
Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of 
Tasks":200}
{"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
{"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task 
Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task Info":{"Task 
ID":1955,"Index":88,"Attempt":2,"Launch Time":1444914699763,"Executor 
ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting
 Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
{quote}

Because of that, *numRunningTasks* can become less than 0, which affects executor allocation.
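
A listener that decrements a running-task counter on every *SparkListenerTaskEnd* will therefore decrement for tasks whose stage was already cleaned up on *SparkListenerStageCompleted*. The sketch below is illustrative only (the class and field names are hypothetical, not the actual *ExecutorAllocationManager* code); it shows one way such a counter can be kept from going negative when a late task-end event arrives:
{code:scala}
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerTaskEnd, SparkListenerTaskStart}

// Illustrative listener: tracks running tasks the way a dynamic-allocation
// listener might, and clamps the counter at 0 so a late SparkListenerTaskEnd
// (arriving after SparkListenerStageCompleted/JobEnd) cannot drive it negative.
class RunningTaskCounter extends SparkListener {
  private var numRunningTasks = 0

  override def onTaskStart(taskStart: SparkListenerTaskStart): Unit = synchronized {
    numRunningTasks += 1
  }

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
    // A killed task's end event may be delivered after its stage completed;
    // without this guard the counter would go below zero.
    numRunningTasks = math.max(0, numRunningTasks - 1)
  }

  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = synchronized {
    // Stage-level cleanup would happen here; task-end events for this stage
    // can still arrive afterwards, which is the ordering shown in the log above.
  }

  def running: Int = synchronized { numRunningTasks }
}
{code}
Such a listener can be registered with SparkContext#addSparkListener for experimentation; clamping at zero only keeps the bad value from skewing the executor count, the late events themselves still need to be handled.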

  was: With the Dynamic Allocation feature enabled, when a task fails more than 
maxFailure times, all the dependent jobs, stages, and tasks are killed or aborted. 
In this process, the SparkListenerTaskEnd events can arrive after 
SparkListenerStageCompleted and SparkListenerJobEnd. Like the Event Log below:


> numRunningTasks can't be less than 0, or it will affect executor allocation
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-11334
>                 URL: https://issues.apache.org/jira/browse/SPARK-11334
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> With the *Dynamic Allocation* feature enabled, when a task fails more than 
> *maxFailure* times, all the dependent jobs, stages, and tasks are killed or 
> aborted. In this process, the *SparkListenerTaskEnd* events can arrive after 
> *SparkListenerStageCompleted* and *SparkListenerJobEnd*, as in the Event Log below:
> {quote}
> {"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage 
> Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of 
> Tasks":200}
> {"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
> {"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task 
> Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task 
> Info":{"Task ID":1955,"Index":88,"Attempt":2,"Launch 
> Time":1444914699763,"Executor 
> ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting
>  Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
> {quote}
> Because of that, *numRunningTasks* can become less than 0, which affects executor allocation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
