[
https://issues.apache.org/jira/browse/SPARK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
meiyoula updated SPARK-8366:
----------------------------
Summary: maxNumExecutorsNeeded should properly handle failed tasks (was:
When a task fails and a new one is appended, the ExecutorAllocationManager can't
sense the new tasks)
> maxNumExecutorsNeeded should properly handle failed tasks
> ---------------------------------------------------------
>
> Key: SPARK-8366
> URL: https://issues.apache.org/jira/browse/SPARK-8366
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.4.0
> Reporter: meiyoula
>
> I use the *dynamic executor allocation* feature.
> When an executor is killed, all tasks running on it fail. Until maxTaskFailures
> is reached, each failed task is resubmitted with a new task id. But the
> *ExecutorAllocationManager* does not count these resubmitted tasks toward the
> total and pending task counts, because the total number of tasks for a stage is
> only set when the stage is submitted.