Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/186#issuecomment-38626677
The way things are now, it would be pretty much equivalent to cancel all
jobs immediately after the exception is captured. In the future, though, the
eventProcessActor's Supervisor could be supervising multiple actors (e.g.
concurrent DAGSchedulers -- don't ask me how), in which case shutting down
one would likely require coordinated action on the others. In that kind of
future, we'd want the Supervisor to handle the cleanup and shutdown of a
DAGScheduler, so we might as well have it do that now.
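To make the idea concrete, here is a minimal sketch of that supervision pattern. All names here are hypothetical and this is plain Scala, not Spark's or Akka's actual API; it only illustrates a Supervisor owning several scheduler handles and, on a failure in any one of them, performing coordinated cleanup across all of them:

```scala
// Hypothetical stand-in for a scheduler the Supervisor manages.
// (Not Spark's DAGScheduler; names are illustrative only.)
class SchedulerHandle(val name: String) {
  var running: Boolean = true
  def cancelAllJobs(): Unit = { running = false }
}

// The Supervisor owns all scheduler handles. When one fails, it does
// not just tear down the failed one: it coordinates cleanup across
// every scheduler it supervises, mirroring the "coordinated action
// on others" described above.
class Supervisor(schedulers: Seq[SchedulerHandle]) {
  def handleFailure(failed: SchedulerHandle, cause: Throwable): Unit = {
    // Cleanup is centralized here rather than in each scheduler,
    // so the shutdown policy lives in one place.
    schedulers.foreach(_.cancelAllJobs())
  }
}
```

With only one supervised scheduler this is equivalent to cancelling all jobs right where the exception is caught, which is the point of the comment: centralizing it in the Supervisor costs nothing now and generalizes to the multi-scheduler case.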