Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/186#issuecomment-38337225
Generally I think it's best to catch exceptions as close as possible to where they're thrown, but only if you can somehow "recover" from the exception. If the exception is unrecoverable though, let it propagate all the way up to a catch-all that logs it and then shuts the service down as cleanly as possible.
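(A minimal sketch of that pattern, not from the PR itself; the names `fetchWithRetry` and `RecoverableError` are hypothetical placeholders, assuming a retriable transient failure as the "recoverable" case.)

```scala
// Hypothetical sketch: handle a recoverable failure near its source,
// and let everything else propagate to a top-level catch-all that
// logs and shuts down cleanly.
object ErrorHandlingSketch {
  // Placeholder exception type standing in for a known-transient failure.
  class RecoverableError(msg: String) extends Exception(msg)

  // Recoverable case: retry close to where the exception is thrown.
  // Anything else (or retries exhausted) falls through the catch and
  // propagates to the caller.
  def fetchWithRetry(attempt: () => Int, retries: Int): Int =
    try attempt()
    catch {
      case _: RecoverableError if retries > 0 =>
        fetchWithRetry(attempt, retries - 1)
    }

  def main(args: Array[String]): Unit = {
    var calls = 0
    val result =
      try {
        fetchWithRetry(() => {
          calls += 1
          if (calls < 3) throw new RecoverableError("transient failure")
          42
        }, retries = 5)
      } catch {
        case t: Throwable =>
          // Unrecoverable case: log at the top level, then shut down
          // as cleanly as possible.
          System.err.println(s"Fatal: ${t.getMessage}; shutting down")
          sys.exit(1)
      }
    println(result)
  }
}
```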
On Fri, Mar 21, 2014 at 5:08 PM, Patrick Wendell
<[email protected]> wrote:
> I am far from an expert on this part of the code, but why not just
> terminate in this case rather than continue with the DAG scheduler in a
> potentially weird and inconsistent state.
>
> An exception in the DAG scheduler is a serious failure which, in many
> cases, indicates the job is doomed and cannot complete anyway.
>
> As is, this seems like it could create some really hard-to-debug
> situations where a user had a partial failure earlier on in their
> application then they see some weird behavior later and can't figure out
> what's going on.
>
> Reply to this email directly or view it on
GitHub<https://github.com/apache/spark/pull/186#issuecomment-38336957>
> .
>