[ https://issues.apache.org/jira/browse/SPARK-18981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
roncenzhao updated SPARK-18981:
-------------------------------
Description:
Related settings:
```
spark.speculation true
spark.dynamicAllocation.minExecutors 0
spark.executor.cores 4
```
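For reference, a hypothetical SparkConf matching these settings. Dynamic allocation itself must be enabled for 'ExecutorAllocationManager' to run at all; the 'spark.dynamicAllocation.enabled' and shuffle-service lines are my assumptions, not part of this report:
```
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical repro configuration; only the last three settings come from the report.
val conf = new SparkConf()
  .setAppName("SPARK-18981-repro")
  .set("spark.dynamicAllocation.enabled", "true") // assumed: needed for ExecutorAllocationManager
  .set("spark.shuffle.service.enabled", "true")   // assumed: required by dynamic allocation on YARN
  .set("spark.speculation", "true")
  .set("spark.dynamicAllocation.minExecutors", "0")
  .set("spark.executor.cores", "4")
val sc = new SparkContext(conf)
```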
When I run the following app, the bug triggers.
```
sc.parallelize(1 to 1000, 100).count() // job1: any multi-task job
Thread.sleep(100 * 1000)               // sleep 100s, long enough for all executors to be released
sc.parallelize(Seq(1), 1).count()      // job2: a single-task job; it hangs and is never scheduled
```
The triggering conditions are as follows:
Condition 1: During the sleep, the executors are released and the number of executors drops to zero a few seconds later, so numExecutorsTarget in 'ExecutorAllocationManager' becomes 0.
Condition 2: In 'ExecutorAllocationListener.onTaskEnd()', numRunningTasks goes negative while job1's tasks are finishing (see the sketch after this list).
Condition 3: Job2 has only one task.
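How condition 2 can arise, as a minimal sketch: with speculation on, a task has two attempts, and a task-end event from the killed speculative attempt can arrive after the listener has already reset its counter at stage completion. The names and the reset-on-stage-completion behavior below are a simplified model of the 2.0.2 'ExecutorAllocationListener', not the actual source:
```
// Simplified, self-contained model of the counter underflow (hypothetical names):
object CounterUnderflow {
  var numRunningTasks = 0

  def onTaskStart(): Unit = numRunningTasks += 1
  def onTaskEnd(): Unit = numRunningTasks -= 1      // also fires for the killed speculative attempt
  def onStageCompleted(): Unit = numRunningTasks = 0 // listener resets when no stages remain

  def main(args: Array[String]): Unit = {
    onTaskStart()       // original attempt of job1's last task
    onTaskStart()       // speculative copy of the same task
    onTaskEnd()         // original attempt finishes; the stage can now complete
    onStageCompleted()  // counter reset to 0
    onTaskEnd()         // late end event from the speculative copy
    println(numRunningTasks) // prints -1
  }
}
```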
Result:
In 'ExecutorAllocationManager.updateAndSyncNumExecutorsTarget()', maxNeeded is computed by 'maxNumExecutorsNeeded()'. Because numRunningTasks is negative, numRunningOrPendingTasks is zero or negative, so maxNeeded comes out as 0 or negative. 'ExecutorAllocationManager' therefore never requests containers from YARN, and the app hangs.
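For concreteness, here is the arithmetic for this scenario. The formula is my paraphrase of 'maxNumExecutorsNeeded()' in Spark 2.0.2 (pending plus running tasks, divided by tasks per executor, rounded up); the plugged-in values come from the conditions above:
```
// Paraphrase of maxNumExecutorsNeeded() (assumed from the 2.0.2 source):
//   maxNeeded = ceil((totalPendingTasks + totalRunningTasks) / tasksPerExecutor)
val tasksPerExecutor = 4   // spark.executor.cores = 4, spark.task.cpus = 1 (default)
val totalPendingTasks = 1  // condition 3: job2 has a single task
val totalRunningTasks = -1 // condition 2: the underflowed counter
val numRunningOrPendingTasks = totalPendingTasks + totalRunningTasks // = 0
val maxNeeded = (numRunningOrPendingTasks + tasksPerExecutor - 1) / tasksPerExecutor
// (0 + 4 - 1) / 4 = 0, so updateAndSyncNumExecutorsTarget() requests nothing from YARN
```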
> The last job hung when speculation is on
> ----------------------------------------
>
> Key: SPARK-18981
> URL: https://issues.apache.org/jira/browse/SPARK-18981
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.0.2
> Environment: Spark 2.0.2
> Hadoop 2.5.0
> Reporter: roncenzhao
> Priority: Critical
>