Merge pull request #445 from kayousterhout/exec_lost

Fail rather than hanging if a task crashes the JVM.

Prior to this commit, if a task crashes the JVM, the task (and
all other tasks running on that executor) is marked as KILLED rather
than FAILED.  As a result, the TaskSetManager retries the task
indefinitely rather than failing the job after maxFailures. Eventually,
this makes the job hang, because the Standalone Scheduler removes
the application after 10 workers have failed, and then the app is left
in a state where it is disconnected from the master and waiting to reconnect.
This commit fixes that problem by marking tasks as FAILED rather than
KILLED when an executor is lost.
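
The following is a minimal, self-contained sketch of the behavior this
commit changes; the names here (TaskState, TaskTracker, handleTaskEnd,
executorLost) are illustrative assumptions, not the actual Spark
TaskSetManager API:

    import scala.collection.mutable

    // Illustrative sketch only -- names are assumptions, not Spark's API.
    object TaskState extends Enumeration {
      val RUNNING, FAILED, KILLED, FINISHED = Value
    }

    class TaskTracker(maxFailures: Int = 3) {
      private val failureCounts = mutable.Map[Long, Int]().withDefaultValue(0)

      // Only FAILED attempts count toward maxFailures; KILLED attempts are
      // simply retried, which is what caused the indefinite retries above.
      def handleTaskEnd(taskId: Long, state: TaskState.Value): Unit = state match {
        case TaskState.FAILED =>
          failureCounts(taskId) += 1
          if (failureCounts(taskId) >= maxFailures) {
            println(s"Task $taskId failed $maxFailures times; aborting task set")
          }
        case _ => // KILLED or FINISHED: nothing is counted
      }

      // When an executor is lost (e.g. a task crashed the JVM), mark every
      // task that was running on it as FAILED rather than KILLED, so the job
      // fails after maxFailures instead of hanging.
      def executorLost(runningTaskIds: Seq[Long]): Unit =
        runningTaskIds.foreach(handleTaskEnd(_, TaskState.FAILED))
    }

With failures counted this way, a JVM crash surfaces as an aborted task set
after a few retries instead of an endless KILLED-and-resubmit loop.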

The downside of this commit is that if task A fails because another
task running on the same executor caused the JVM to crash, the failure
will incorrectly be counted as a failure of task A. This should not
be an issue in practice because maxFailures is typically set to 3, and it is
unlikely that a task will be co-located with a JVM-crashing task
multiple times.


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/c06a307c
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/c06a307c
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/c06a307c

Branch: refs/heads/master
Commit: c06a307ca22901839df00d25fe623f6faa6af17e
Parents: 84595ea 718a13c
Author: Reynold Xin <r...@apache.org>
Authored: Wed Jan 15 23:47:25 2014 -0800
Committer: Reynold Xin <r...@apache.org>
Committed: Wed Jan 15 23:47:25 2014 -0800

----------------------------------------------------------------------
 .../apache/spark/scheduler/TaskSetManager.scala    |  2 +-
 .../scala/org/apache/spark/DistributedSuite.scala  | 17 +++++++++++++++++
 2 files changed, 18 insertions(+), 1 deletion(-)
----------------------------------------------------------------------

