Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/7014#discussion_r33257390
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -728,16 +730,16 @@ private[spark] class TaskSetManager(
         logError("Task %d in stage %s failed %d times; aborting job".format(
           index, taskSet.id, maxTaskFailures))
         abort("Task %d in stage %s failed %d times, most recent failure: %s\nDriver stacktrace:"
-          .format(index, taskSet.id, maxTaskFailures, failureReason))
+          .format(index, taskSet.id, maxTaskFailures, failureReason),
+          failureException)
         return
       }
     }
     maybeFinishTaskSet()
   }

-  def abort(message: String): Unit = sched.synchronized {
+  def abort(message: String, exception: Option[Throwable] = None): Unit = sched.synchronized {
--- End diff ---
Do we really need the default ` = None` for all of these cases?
This isn't so much about this particular method definition as about all of the
changes. It seems like you are supplying the exception (or `None`) explicitly in
most cases.
If it really is needed, that's fine. I'm just asking b/c I know I have a
habit of putting in a lot of defaults early during a refactoring just to get
things compiling, but then when it's all done I've put in far more than
necessary.
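To illustrate the trade-off being discussed, here is a minimal hypothetical sketch (not the actual `TaskSetManager` code; the method names and bodies are made up) contrasting a default `Option[Throwable]` parameter with a signature that forces every caller to state the argument:

```scala
object AbortSketch {
  // With a default: existing call sites that never pass an exception
  // keep compiling unchanged, at the cost of hiding the parameter.
  def abortWithDefault(message: String, exception: Option[Throwable] = None): String =
    exception.map(e => s"$message (${e.getMessage})").getOrElse(message)

  // Without a default: every caller must supply the argument explicitly,
  // which is the reviewer's point -- if most call sites already pass a
  // value (or an explicit None), the default buys little.
  def abortExplicit(message: String, exception: Option[Throwable]): String =
    exception.map(e => s"$message (${e.getMessage})").getOrElse(message)

  def main(args: Array[String]): Unit = {
    // Caller relying on the default:
    println(abortWithDefault("task failed"))
    // Caller forced to be explicit about the exception:
    println(abortExplicit("task failed", Some(new RuntimeException("boom"))))
  }
}
```

The usual rule of thumb is to keep a default only when a clear majority of call sites genuinely want that value; otherwise the explicit signature documents intent at each call site.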
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.