[
https://issues.apache.org/jira/browse/SPARK-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16220881#comment-16220881
]
Apache Spark commented on SPARK-11334:
--------------------------------------
User 'sitalkedia' has created a pull request for this issue:
https://github.com/apache/spark/pull/19580
> numRunningTasks can't be less than 0, or it will affect executor allocation
> ---------------------------------------------------------------------------
>
> Key: SPARK-11334
> URL: https://issues.apache.org/jira/browse/SPARK-11334
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.4.0
> Reporter: meiyoula
> Assignee: meiyoula
>
> With the *Dynamic Allocation* feature enabled, when a task fails more than *maxFailure*
> times, all the dependent jobs, stages, and tasks are killed or aborted. In this process,
> the *SparkListenerTaskEnd* events arrive after the *SparkListenerStageCompleted*
> and *SparkListenerJobEnd* events, as in the Event Log below:
> {code}
> {"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage
> Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of
> Tasks":200}
> {"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
> {"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task
> Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task
> Info":{"Task ID":1955,"Index":88,"Attempt":2,"Launch
> Time":1444914699763,"Executor
> ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting
> Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
> {code}
> Because of that, *numRunningTasks* in the *ExecutorAllocationManager* class can drop
> below 0, which affects executor allocation.
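The sketch below illustrates the counter problem under simplified assumptions; the class and method names are illustrative stand-ins, not the actual ExecutorAllocationManager/ExecutorAllocationListener API. It shows how a TaskEnd event that arrives after the stage bookkeeping has already been cleared can push a naive counter below zero, and how clamping at zero (in the spirit of the issue title) avoids it:
{code}
// Illustrative sketch only -- a simplified stand-in for the listener
// bookkeeping inside ExecutorAllocationManager; names are hypothetical.
class SimplifiedAllocationListener {
  private var numRunningTasks = 0

  def onTaskStart(): Unit = {
    numRunningTasks += 1
  }

  def onStageCompleted(): Unit = {
    // When a stage is aborted, its bookkeeping is reset here even though
    // TaskEnd events for the killed tasks have not been delivered yet.
    numRunningTasks = 0
  }

  def onTaskEnd(): Unit = {
    // A TaskEnd delivered after onStageCompleted would drive a plain
    // decrement below zero and skew the executor-demand estimate.
    // Clamping at zero keeps the counter consistent:
    numRunningTasks = math.max(numRunningTasks - 1, 0)
  }

  def runningTasks: Int = numRunningTasks
}
{code}
With events delivered in the out-of-order sequence shown in the log above (TaskStart, StageCompleted, then TaskEnd), the clamped counter stays at 0 instead of going to -1.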