[ https://issues.apache.org/jira/browse/SPARK-36575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
hujiahua updated SPARK-36575:
-----------------------------
Description:
When an executor finishes a task of some stage, the driver receives a
`StatusUpdate` event to handle it. If at the same time the driver finds that the
executor's heartbeat has timed out, it also has to handle an ExecutorLost event.
There is a race condition between these two events which can leave the task
never rescheduled, so the stage hangs forever.
The problem is that `TaskResultGetter.enqueueSuccessfulTask` handles a
successful task on an asynchronous thread, which means the synchronized lock on
`TaskSchedulerImpl` is released midway through processing
(https://github.com/apache/spark/blob/branch-2.3/core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala#L61).
So `TaskSchedulerImpl` may handle executorLost first, and only afterwards does
the asynchronous thread get to handle the successful task. This leaves
`TaskSetManager.successful` and `TaskSetManager.tasksSuccessful` with wrong values.
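To make the interleaving concrete, here is a minimal, self-contained sketch of
the race (illustrative only: `schedulerLock`, `getterPool` and `taskMarkedLost`
are simplified stand-ins for the real `TaskSchedulerImpl` lock, the
`TaskResultGetter` thread pool and the `TaskSetManager` bookkeeping):

```scala
import java.util.concurrent.Executors

object RaceSketch {
  private val schedulerLock = new Object                    // stands in for the TaskSchedulerImpl lock
  private val getterPool = Executors.newFixedThreadPool(4)  // stands in for TaskResultGetter's pool
  @volatile private var taskMarkedLost = false              // stands in for TaskSetManager state

  // Driver handles StatusUpdate(FINISHED) while holding the scheduler lock.
  def statusUpdate(tid: Long): Unit = schedulerLock.synchronized {
    // enqueueSuccessfulTask only *submits* the work; the lock is released as
    // soon as this block returns, long before handleSuccessfulTask runs.
    getterPool.execute(() => handleSuccessfulTask(tid))
  }

  // The heartbeat-expiry path can acquire the lock before the pool thread does.
  def executorLost(execId: String): Unit = schedulerLock.synchronized {
    taskMarkedLost = true
    println(s"executorLost($execId) ran first")
  }

  private def handleSuccessfulTask(tid: Long): Unit = schedulerLock.synchronized {
    // By the time we get here, executorLost may already have updated the
    // TaskSetManager state, so successful/tasksSuccessful go wrong.
    println(s"handleSuccessfulTask($tid) sees taskMarkedLost=$taskMarkedLost")
  }

  def main(args: Array[String]): Unit = {
    statusUpdate(347212402L)  // TID from the log below
    executorLost("366724")    // can win the race against the pool thread
    getterPool.shutdown()
  }
}
```

Which of `executorLost` and `handleSuccessfulTask` wins depends purely on
thread scheduling, which is why the hang only shows up intermittently.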
Then `HeartbeatReceiver.expireDeadHosts` calls killAndReplaceExecutor, which
makes `TaskSchedulerImpl.executorLost` run a second time. The second run
reduces `TaskSetManager.tasksSuccessful` by one
(https://github.com/apache/spark/blob/branch-2.3/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L948),
so `TaskSetManager.tasksSuccessful` is always less than
`TaskSetManager.numTasks` and the stage hangs forever.
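The resulting counter arithmetic, reconstructed from the description above as a
toy example (the variables only mirror the `TaskSetManager` fields; this is not
the real code):

```scala
object CounterSketch extends App {
  var tasksSuccessful = 0
  val numTasks = 1

  // executorLost #1 (heartbeat timeout): the running task is marked failed.
  // handleSuccessfulTask then runs late and still counts the task as done:
  tasksSuccessful += 1                // tasksSuccessful == 1
  // executorLost #2 (via killAndReplaceExecutor): the "successful" task lived
  // on the lost executor, so TaskSetManager.scala#L948 rolls the counter back,
  // but the task is never resubmitted:
  tasksSuccessful -= 1                // tasksSuccessful == 0, stuck below numTasks
  assert(tasksSuccessful < numTasks)  // the stage can never complete
}
```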
Related log from when the issue occurred:
21/08/05 02:58:14,784 INFO [dispatcher-event-loop-8] TaskSetManager: Starting
task 4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724,
partition 4004, ANY, 7994 bytes)
21/08/05 03:00:24,126 ERROR [dispatcher-event-loop-4] TaskSchedulerImpl: Lost
executor 366724 on 10.109.89.3: Executor heartbeat timed out after 140830 ms
21/08/05 03:00:24,218 WARN [dispatcher-event-loop-4] TaskSetManager: Lost task
4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724):
ExecutorLostFailure (executor 366724 exited caused by one of the running tasks)
Reason: Executor heartbeat timed out after 140830 ms
21/08/05 03:00:24,542 INFO [task-result-getter-2] TaskSetManager: Finished task
4004.0 in stage 1328625.0 (TID 347212402) in 129758 ms on 10.109.89.3 (executor
366724) (3047/5400)
21/08/05 03:00:34,621 INFO [dispatcher-event-loop-8] TaskSchedulerImpl:
Executor 366724 on 10.109.89.3 killed by driver.
21/08/05 03:00:34,771 INFO [spark-listener-group-executorManagement]
ExecutorMonitor: Executor 366724 removed (new total is 793)
21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Executor
lost: 366724 (epoch 417416)
21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14]
BlockManagerMasterEndpoint: Trying to remove executor 366724 from
BlockManagerMaster.
21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14]
BlockManagerMasterEndpoint: Removing block manager BlockManagerId(366724,
10.109.89.3, 43402, None)
21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] BlockManagerMaster:
Removed 366724 successfully in removeExecutor
21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle
files lost for executor: 366724 (epoch 417416)
21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Executor
lost: 366724 (epoch 417473)
21/08/05 03:00:44,584 INFO [dispatcher-event-loop-15]
BlockManagerMasterEndpoint: Trying to remove executor 366724 from
BlockManagerMaster.
21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] BlockManagerMaster:
Removed 366724 successfully in removeExecutor
21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle
files lost for executor: 366724 (epoch 417473)
was:
When an executor finishes a task of some stage, the driver receives a
`StatusUpdate` event to handle it. If at the same time the driver finds that the
executor's heartbeat has timed out, it also has to handle an ExecutorLost event.
There is a race condition between these two events which can leave the task
never rescheduled, so the stage hangs forever.
The problem is that `TaskResultGetter.enqueueSuccessfulTask` handles a
successful task on an asynchronous thread, which means the synchronized lock on
`TaskSchedulerImpl` is released midway through processing
[https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala#L61].
So `TaskSchedulerImpl` may handle executorLost first, and only afterwards does
the asynchronous thread get to handle the successful task. This leaves
`TaskSetManager.successful` and `TaskSetManager.tasksSuccessful` with wrong
values, and the task is never queued again.
> Executor lost may cause spark stage to hang
> -------------------------------------------
>
> Key: SPARK-36575
> URL: https://issues.apache.org/jira/browse/SPARK-36575
> Project: Spark
> Issue Type: Bug
> Components: Scheduler
> Affects Versions: 2.3.3
> Reporter: hujiahua
> Priority: Major
>